00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2373
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3638
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.158 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.159 The recommended git tool is: git
00:00:00.159 using credential 00000000-0000-0000-0000-000000000002
00:00:00.160 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.197 Fetching changes from the remote Git repository
00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.237 Using shallow fetch with depth 1
00:00:00.237 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.237 > git --version # timeout=10
00:00:00.272 > git --version # 'git version 2.39.2'
00:00:00.272 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.290 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.290 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.580 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.593 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.604 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:06.604 > git config core.sparsecheckout # timeout=10
00:00:06.615 > git read-tree -mu HEAD # timeout=10
00:00:06.631 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:06.647 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:06.647 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:06.732 [Pipeline] Start of Pipeline
00:00:06.746 [Pipeline] library
00:00:06.748 Loading library shm_lib@master
00:00:06.748 Library shm_lib@master is cached. Copying from home.
00:00:06.767 [Pipeline] node
00:00:06.790 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.792 [Pipeline] {
00:00:06.802 [Pipeline] catchError
00:00:06.803 [Pipeline] {
00:00:06.820 [Pipeline] wrap
00:00:06.831 [Pipeline] {
00:00:06.840 [Pipeline] stage
00:00:06.842 [Pipeline] { (Prologue)
00:00:07.058 [Pipeline] sh
00:00:07.856 + logger -p user.info -t JENKINS-CI
00:00:07.887 [Pipeline] echo
00:00:07.888 Node: GP11
00:00:07.895 [Pipeline] sh
00:00:08.242 [Pipeline] setCustomBuildProperty
00:00:08.253 [Pipeline] echo
00:00:08.255 Cleanup processes
00:00:08.259 [Pipeline] sh
00:00:08.552 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.552 4683 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.565 [Pipeline] sh
00:00:08.858 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.858 ++ grep -v 'sudo pgrep'
00:00:08.858 ++ awk '{print $1}'
00:00:08.858 + sudo kill -9
00:00:08.858 + true
00:00:08.880 [Pipeline] cleanWs
00:00:08.891 [WS-CLEANUP] Deleting project workspace...
00:00:08.891 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.906 [WS-CLEANUP] done
00:00:08.911 [Pipeline] setCustomBuildProperty
00:00:08.926 [Pipeline] sh
00:00:09.221 + sudo git config --global --replace-all safe.directory '*'
00:00:09.313 [Pipeline] httpRequest
00:00:11.197 [Pipeline] echo
00:00:11.199 Sorcerer 10.211.164.20 is alive
00:00:11.208 [Pipeline] retry
00:00:11.210 [Pipeline] {
00:00:11.223 [Pipeline] httpRequest
00:00:11.228 HttpMethod: GET
00:00:11.228 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.229 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:11.234 Response Code: HTTP/1.1 200 OK
00:00:11.234 Success: Status code 200 is in the accepted range: 200,404
00:00:11.235 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.353 [Pipeline] }
00:00:12.373 [Pipeline] // retry
00:00:12.380 [Pipeline] sh
00:00:12.679 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.696 [Pipeline] httpRequest
00:00:13.229 [Pipeline] echo
00:00:13.231 Sorcerer 10.211.164.20 is alive
00:00:13.241 [Pipeline] retry
00:00:13.243 [Pipeline] {
00:00:13.257 [Pipeline] httpRequest
00:00:13.263 HttpMethod: GET
00:00:13.263 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:13.264 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:13.279 Response Code: HTTP/1.1 200 OK
00:00:13.279 Success: Status code 200 is in the accepted range: 200,404
00:00:13.280 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:42.684 [Pipeline] }
00:01:42.701 [Pipeline] // retry
00:01:42.709 [Pipeline] sh
00:01:43.007 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:01:45.554 [Pipeline] sh
00:01:45.837 + git -C spdk log --oneline -n5
00:01:45.837 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:45.837 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:01:45.837 4bcab9fb9 correct kick for CQ full case
00:01:45.837 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:01:45.837 318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:45.857 [Pipeline] withCredentials
00:01:45.868 > git --version # timeout=10
00:01:45.879 > git --version # 'git version 2.39.2'
00:01:45.900 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:45.902 [Pipeline] {
00:01:45.910 [Pipeline] retry
00:01:45.912 [Pipeline] {
00:01:45.927 [Pipeline] sh
00:01:46.467 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:46.479 [Pipeline] }
00:01:46.497 [Pipeline] // retry
00:01:46.501 [Pipeline] }
00:01:46.516 [Pipeline] // withCredentials
00:01:46.525 [Pipeline] httpRequest
00:01:46.997 [Pipeline] echo
00:01:46.999 Sorcerer 10.211.164.20 is alive
00:01:47.008 [Pipeline] retry
00:01:47.010 [Pipeline] {
00:01:47.025 [Pipeline] httpRequest
00:01:47.031 HttpMethod: GET
00:01:47.031 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:47.033 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:47.047 Response Code: HTTP/1.1 200 OK
00:01:47.047 Success: Status code 200 is in the accepted range: 200,404
00:01:47.048 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:01.585 [Pipeline] }
00:02:01.601 [Pipeline] // retry
00:02:01.608 [Pipeline] sh
00:02:01.898 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:03.821 [Pipeline] sh
00:02:04.117 + git -C dpdk log --oneline -n5
00:02:04.117 caf0f5d395 version: 22.11.4
00:02:04.117 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:04.117 dc9c799c7d vhost: fix missing spinlock unlock
00:02:04.117 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:04.117 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:04.129 [Pipeline] }
00:02:04.144 [Pipeline] // stage
00:02:04.152 [Pipeline] stage
00:02:04.154 [Pipeline] { (Prepare)
00:02:04.173 [Pipeline] writeFile
00:02:04.189 [Pipeline] sh
00:02:04.481 + logger -p user.info -t JENKINS-CI
00:02:04.496 [Pipeline] sh
00:02:04.785 + logger -p user.info -t JENKINS-CI
00:02:04.798 [Pipeline] sh
00:02:05.087 + cat autorun-spdk.conf
00:02:05.087 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.087 SPDK_TEST_NVMF=1
00:02:05.087 SPDK_TEST_NVME_CLI=1
00:02:05.087 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.087 SPDK_TEST_NVMF_NICS=e810
00:02:05.087 SPDK_TEST_VFIOUSER=1
00:02:05.087 SPDK_RUN_UBSAN=1
00:02:05.087 NET_TYPE=phy
00:02:05.087 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:05.087 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:05.095 RUN_NIGHTLY=1
00:02:05.100 [Pipeline] readFile
00:02:05.137 [Pipeline] withEnv
00:02:05.139 [Pipeline] {
00:02:05.151 [Pipeline] sh
00:02:05.441 + set -ex
00:02:05.441 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:05.441 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:05.441 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.441 ++ SPDK_TEST_NVMF=1
00:02:05.441 ++ SPDK_TEST_NVME_CLI=1
00:02:05.441 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.441 ++ SPDK_TEST_NVMF_NICS=e810
00:02:05.441 ++ SPDK_TEST_VFIOUSER=1
00:02:05.441 ++ SPDK_RUN_UBSAN=1
00:02:05.441 ++ NET_TYPE=phy
00:02:05.441 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:05.441 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:05.441 ++ RUN_NIGHTLY=1
00:02:05.441 + case $SPDK_TEST_NVMF_NICS in
00:02:05.441 + DRIVERS=ice
00:02:05.441 + [[ tcp == \r\d\m\a ]]
00:02:05.441 + [[ -n ice ]]
00:02:05.441 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:05.441 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:08.745 rmmod: ERROR: Module irdma is not currently loaded
00:02:08.745 rmmod: ERROR: Module i40iw is not currently loaded
00:02:08.745 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:08.745 + true
00:02:08.745 + for D in $DRIVERS
00:02:08.745 + sudo modprobe ice
00:02:08.745 + exit 0
00:02:08.756 [Pipeline] }
00:02:08.772 [Pipeline] // withEnv
00:02:08.776 [Pipeline] }
00:02:08.788 [Pipeline] // stage
00:02:08.796 [Pipeline] catchError
00:02:08.797 [Pipeline] {
00:02:08.808 [Pipeline] timeout
00:02:08.809 Timeout set to expire in 1 hr 0 min
00:02:08.810 [Pipeline] {
00:02:08.822 [Pipeline] stage
00:02:08.824 [Pipeline] { (Tests)
00:02:08.836 [Pipeline] sh
00:02:09.126 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:09.126 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:09.126 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:09.126 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:09.126 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:09.126 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:09.127 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:09.127 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:09.127 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:09.127 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:09.127 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:09.127 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:09.127 + source /etc/os-release
00:02:09.127 ++ NAME='Fedora Linux'
00:02:09.127 ++ VERSION='39 (Cloud Edition)'
00:02:09.127 ++ ID=fedora
00:02:09.127 ++ VERSION_ID=39
00:02:09.127 ++ VERSION_CODENAME=
00:02:09.127 ++ PLATFORM_ID=platform:f39
00:02:09.127 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:09.127 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:09.127 ++ LOGO=fedora-logo-icon
00:02:09.127 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:09.127 ++ HOME_URL=https://fedoraproject.org/
00:02:09.127 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:09.127 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:09.127 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:09.127 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:09.127 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:09.127 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:09.127 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:09.127 ++ SUPPORT_END=2024-11-12
00:02:09.127 ++ VARIANT='Cloud Edition'
00:02:09.127 ++ VARIANT_ID=cloud
00:02:09.127 + uname -a
00:02:09.127 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:09.127 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:10.064 Hugepages
00:02:10.064 node hugesize free / total
00:02:10.064 node0 1048576kB 0 / 0
00:02:10.064 node0 2048kB 0 / 0
00:02:10.064 node1 1048576kB 0 / 0
00:02:10.064 node1 2048kB 0 / 0
00:02:10.064
00:02:10.323 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.323 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:10.323 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:10.323 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:10.323 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:10.323 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:10.323 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:10.323 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:10.323 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:10.323 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:10.323 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:10.323 + rm -f /tmp/spdk-ld-path 00:02:10.323 + source autorun-spdk.conf 00:02:10.323 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.323 ++ SPDK_TEST_NVMF=1 00:02:10.323 ++ SPDK_TEST_NVME_CLI=1 00:02:10.323 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.323 ++ SPDK_TEST_NVMF_NICS=e810 00:02:10.323 ++ SPDK_TEST_VFIOUSER=1 00:02:10.323 ++ SPDK_RUN_UBSAN=1 00:02:10.323 ++ NET_TYPE=phy 00:02:10.323 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.323 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.323 ++ RUN_NIGHTLY=1 00:02:10.323 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.323 + [[ -n '' ]] 00:02:10.323 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.323 + for M in /var/spdk/build-*-manifest.txt 00:02:10.323 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:10.324 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.324 + for M in /var/spdk/build-*-manifest.txt 00:02:10.324 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.324 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.324 + for M in /var/spdk/build-*-manifest.txt 00:02:10.324 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.324 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:10.324 ++ uname 00:02:10.324 + [[ Linux == \L\i\n\u\x ]] 00:02:10.324 + sudo dmesg -T 00:02:10.324 + sudo dmesg --clear 00:02:10.324 + dmesg_pid=5428 00:02:10.324 + [[ Fedora Linux == FreeBSD ]] 00:02:10.324 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.324 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.324 + sudo dmesg -Tw 00:02:10.324 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.324 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.324 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.324 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.324 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.324 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:10.324 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.324 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.324 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.324 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.324 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.324 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.324 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.324 10:56:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:10.324 10:56:34 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.324 10:56:34 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:10.324 10:56:34 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:10.324 10:56:34 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:10.583 10:56:34 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:10.583 10:56:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:10.583 10:56:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:10.583 10:56:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.583 10:56:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.583 10:56:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.583 10:56:34 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.583 10:56:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.583 10:56:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.583 10:56:34 -- paths/export.sh@5 -- $ export PATH 00:02:10.583 10:56:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.583 10:56:34 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.583 10:56:34 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:10.583 10:56:34 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731837394.XXXXXX 00:02:10.583 10:56:35 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731837394.X4YnpE 00:02:10.583 10:56:35 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:10.583 10:56:35 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:10.583 10:56:35 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:10.583 10:56:35 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:10.583 10:56:35 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.584 10:56:35 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.584 10:56:35 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:10.584 10:56:35 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:10.584 10:56:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.584 10:56:35 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:10.584 10:56:35 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:10.584 10:56:35 -- pm/common@17 -- $ local monitor 00:02:10.584 10:56:35 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.584 10:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.584 10:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.584 10:56:35 -- pm/common@21 -- $ date +%s 00:02:10.584 10:56:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.584 10:56:35 -- pm/common@21 -- $ date +%s 00:02:10.584 10:56:35 -- pm/common@25 -- $ sleep 1 00:02:10.584 10:56:35 -- pm/common@21 -- $ date +%s 00:02:10.584 10:56:35 -- pm/common@21 -- $ date +%s 00:02:10.584 10:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731837395 00:02:10.584 10:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731837395 00:02:10.584 10:56:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731837395 00:02:10.584 10:56:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731837395 00:02:10.584 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731837395_collect-vmstat.pm.log 00:02:10.584 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731837395_collect-cpu-load.pm.log 00:02:10.584 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731837395_collect-cpu-temp.pm.log 00:02:10.584 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731837395_collect-bmc-pm.bmc.pm.log 00:02:11.528 10:56:36 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:11.528 10:56:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.528 10:56:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.528 10:56:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.528 10:56:36 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.528 Sun Nov 17 09:56:36 AM UTC 2024 00:02:11.528 10:56:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.528 v25.01-pre-189-g83e8405e4 00:02:11.528 10:56:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.528 10:56:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.528 10:56:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.528 10:56:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:11.528 10:56:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:11.528 10:56:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.528 ************************************ 00:02:11.528 START TEST ubsan 00:02:11.528 ************************************ 00:02:11.528 10:56:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:11.528 using ubsan 00:02:11.528 00:02:11.528 real 0m0.000s 00:02:11.528 user 0m0.000s 00:02:11.528 sys 0m0.000s 00:02:11.529 10:56:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:11.529 10:56:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.529 ************************************ 00:02:11.529 END TEST ubsan 00:02:11.529 ************************************ 00:02:11.529 10:56:36 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:11.529 10:56:36 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:11.529 10:56:36 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:11.529 10:56:36 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:11.529 10:56:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:11.529 10:56:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.529 ************************************ 00:02:11.529 START TEST build_native_dpdk 00:02:11.529 ************************************ 00:02:11.529 10:56:36 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.529 10:56:36 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:11.529 caf0f5d395 version: 22.11.4 00:02:11.529 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:11.529 dc9c799c7d vhost: fix missing spinlock unlock 00:02:11.529 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:11.529 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.529 10:56:36 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.529 10:56:36 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:11.529 10:56:36 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:11.789 patching file config/rte_config.h 00:02:11.789 Hunk #1 succeeded at 60 (offset 1 line). 
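The `cmp_versions` trace above compares `22.11.4` against `21.11.0` one dotted field at a time. A minimal sketch of the same idea in plain shell, using GNU `sort -V` instead of the manual per-field loop in `scripts/common.sh` (the `lt` name mirrors the helper seen in the trace; this is an illustration, not the script's actual code):

```shell
# lt VER1 VER2 -- true when VER1 sorts strictly before VER2 in version order.
# Hedged sketch: relies on GNU sort's -V (version sort), unlike the
# field-by-field comparison loop traced in scripts/common.sh.
lt() {
    [ "$1" = "$2" ] && return 1            # equal is not "less than"
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

lt 22.11.4 21.11.0 && echo lt || echo "not lt"   # prints "not lt"
lt 22.11.4 24.07.0 && echo lt || echo "not lt"   # prints "lt"
```

This reproduces the two results in the log: 22.11.4 is not older than 21.11.0 (the `lt` trace returns 1), but it is older than 24.07.0, which is why the version-gated `patch -p1` steps run.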
00:02:11.789 10:56:36 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:11.789 10:56:36 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:11.789 patching file lib/pcapng/rte_pcapng.c 00:02:11.789 Hunk #1 succeeded at 110 (offset -18 lines). 
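Both `patch -p1` runs above strip one leading path component (the `a/` and `b/` prefixes in the diff headers) and report when a hunk lands at an offset from where the diff expected it. A self-contained sketch against a throwaway file (paths illustrative, not the autobuild tree):

```shell
# Build a tiny tree and a unified diff against it, then apply with -p1.
mkdir -p demo/config
printf 'old line\n' > demo/config/example.h
cat > fix.patch <<'EOF'
--- a/config/example.h
+++ b/config/example.h
@@ -1 +1 @@
-old line
+new line
EOF
# -p1 strips the "a/"/"b/" prefix, so the patch applies relative to demo/.
(cd demo && patch -p1 < ../fix.patch)
cat demo/config/example.h    # prints "new line"
```

When the surrounding context has moved since the diff was made, `patch` still applies the hunk and reports it, e.g. "Hunk #1 succeeded at 110 (offset -18 lines)" as seen above.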
00:02:11.789 10:56:36 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:11.789 10:56:36 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:11.790 10:56:36 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:11.790 10:56:36 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:11.790 10:56:36 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:11.790 10:56:36 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:11.790 10:56:36 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:18.358 The Meson build system 00:02:18.358 Version: 
1.5.0 00:02:18.358 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:18.358 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:18.358 Build type: native build 00:02:18.358 Program cat found: YES (/usr/bin/cat) 00:02:18.358 Project name: DPDK 00:02:18.358 Project version: 22.11.4 00:02:18.358 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:18.358 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:18.358 Host machine cpu family: x86_64 00:02:18.358 Host machine cpu: x86_64 00:02:18.358 Message: ## Building in Developer Mode ## 00:02:18.358 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:18.358 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:18.358 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.358 Program objdump found: YES (/usr/bin/objdump) 00:02:18.358 Program python3 found: YES (/usr/bin/python3) 00:02:18.358 Program cat found: YES (/usr/bin/cat) 00:02:18.358 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:18.358 Checking for size of "void *" : 8 00:02:18.358 Checking for size of "void *" : 8 (cached) 00:02:18.358 Library m found: YES 00:02:18.358 Library numa found: YES 00:02:18.358 Has header "numaif.h" : YES 00:02:18.358 Library fdt found: NO 00:02:18.358 Library execinfo found: NO 00:02:18.358 Has header "execinfo.h" : YES 00:02:18.358 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:18.358 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.358 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.358 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.358 Run-time dependency openssl found: YES 3.1.1 00:02:18.358 Run-time dependency libpcap found: YES 1.10.4 00:02:18.358 Has header "pcap.h" with dependency libpcap: YES 00:02:18.358 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.358 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.358 Compiler for C supports arguments -Wformat: YES 00:02:18.358 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.358 Compiler for C supports arguments -Wformat-security: NO 00:02:18.358 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.358 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.358 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.358 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.358 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.358 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.358 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.358 Compiler for C supports arguments -Wundef: YES 00:02:18.358 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.358 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.358 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.358 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.358 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.358 Compiler for C supports arguments -mavx512f: YES 00:02:18.358 Checking if "AVX512 checking" compiles: YES 00:02:18.358 Fetching value of define "__SSE4_2__" : 1 00:02:18.358 Fetching value of define "__AES__" : 1 00:02:18.358 Fetching value of define "__AVX__" : 1 00:02:18.358 Fetching value of define "__AVX2__" : (undefined) 00:02:18.358 Fetching value of define "__AVX512BW__" : (undefined) 00:02:18.358 Fetching value of define "__AVX512CD__" : (undefined) 00:02:18.358 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:18.358 Fetching value of define "__AVX512F__" : (undefined) 00:02:18.358 Fetching value of define "__AVX512VL__" : (undefined) 00:02:18.358 Fetching value of define "__PCLMUL__" : 1 00:02:18.358 Fetching value of define "__RDRND__" : 1 00:02:18.358 Fetching value of define "__RDSEED__" : (undefined) 00:02:18.358 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:18.358 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.358 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.358 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.358 Checking for function "getentropy" : YES 00:02:18.358 Message: lib/eal: Defining dependency "eal" 00:02:18.358 Message: lib/ring: Defining dependency "ring" 00:02:18.358 Message: lib/rcu: Defining dependency "rcu" 00:02:18.358 Message: lib/mempool: Defining dependency "mempool" 00:02:18.358 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.358 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.358 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.358 Compiler for C supports arguments -mpclmul: YES 00:02:18.358 Compiler for C supports arguments -maes: YES 00:02:18.358 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.358 Compiler for C supports arguments -mavx512bw: YES 00:02:18.358 Compiler for C supports arguments -mavx512dq: YES 
00:02:18.358 Compiler for C supports arguments -mavx512vl: YES 00:02:18.358 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.358 Compiler for C supports arguments -mavx2: YES 00:02:18.358 Compiler for C supports arguments -mavx: YES 00:02:18.358 Message: lib/net: Defining dependency "net" 00:02:18.358 Message: lib/meter: Defining dependency "meter" 00:02:18.358 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.358 Message: lib/pci: Defining dependency "pci" 00:02:18.358 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.358 Message: lib/metrics: Defining dependency "metrics" 00:02:18.358 Message: lib/hash: Defining dependency "hash" 00:02:18.358 Message: lib/timer: Defining dependency "timer" 00:02:18.358 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:18.358 Compiler for C supports arguments -mavx2: YES (cached) 00:02:18.358 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:18.358 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:18.358 Message: lib/acl: Defining dependency "acl" 00:02:18.358 Message: lib/bbdev: Defining dependency "bbdev" 00:02:18.358 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:18.358 Run-time dependency libelf found: YES 0.191 00:02:18.358 Message: lib/bpf: Defining dependency "bpf" 00:02:18.358 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:18.358 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.358 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.358 Message: lib/distributor: Defining dependency "distributor" 00:02:18.358 Message: lib/efd: Defining dependency "efd" 00:02:18.358 Message: lib/eventdev: Defining dependency "eventdev" 00:02:18.358 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:18.358 Message: lib/gro: Defining dependency "gro" 00:02:18.358 Message: lib/gso: Defining dependency "gso" 00:02:18.358 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:18.358 Message: lib/jobstats: Defining dependency "jobstats" 00:02:18.358 Message: lib/latencystats: Defining dependency "latencystats" 00:02:18.358 Message: lib/lpm: Defining dependency "lpm" 00:02:18.358 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:18.358 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:18.358 Message: lib/member: Defining dependency "member" 00:02:18.358 Message: lib/pcapng: Defining dependency "pcapng" 00:02:18.358 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:18.358 Message: lib/power: Defining dependency "power" 00:02:18.358 Message: lib/rawdev: Defining dependency "rawdev" 00:02:18.358 Message: lib/regexdev: Defining dependency "regexdev" 00:02:18.358 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.358 Message: lib/rib: Defining dependency "rib" 00:02:18.358 Message: lib/reorder: Defining dependency "reorder" 00:02:18.358 Message: lib/sched: Defining dependency "sched" 00:02:18.358 Message: lib/security: Defining dependency "security" 00:02:18.358 Message: lib/stack: Defining dependency "stack" 00:02:18.358 Has header "linux/userfaultfd.h" : YES 00:02:18.358 Message: lib/vhost: Defining dependency "vhost" 00:02:18.358 Message: lib/ipsec: Defining dependency "ipsec" 00:02:18.358 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.358 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:18.358 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:18.359 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:18.359 Message: lib/fib: 
Defining dependency "fib" 00:02:18.359 Message: lib/port: Defining dependency "port" 00:02:18.359 Message: lib/pdump: Defining dependency "pdump" 00:02:18.359 Message: lib/table: Defining dependency "table" 00:02:18.359 Message: lib/pipeline: Defining dependency "pipeline" 00:02:18.359 Message: lib/graph: Defining dependency "graph" 00:02:18.359 Message: lib/node: Defining dependency "node" 00:02:18.359 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.359 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.359 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:18.359 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.359 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:18.359 Compiler for C supports arguments -Wno-unused-value: YES 00:02:19.742 Compiler for C supports arguments -Wno-format: YES 00:02:19.742 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.742 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:19.742 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:19.742 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:19.742 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:19.742 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:19.742 Compiler for C supports arguments -mavx2: YES (cached) 00:02:19.742 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.742 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.742 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.742 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:19.742 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:19.742 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:19.742 Configuring doxy-api.conf using configuration 00:02:19.742 Program sphinx-build found: NO 00:02:19.742 Configuring rte_build_config.h using 
configuration 00:02:19.742 Message: 00:02:19.742 ================= 00:02:19.742 Applications Enabled 00:02:19.742 ================= 00:02:19.742 00:02:19.742 apps: 00:02:19.742 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:19.742 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:19.742 test-security-perf, 00:02:19.742 00:02:19.742 Message: 00:02:19.742 ================= 00:02:19.742 Libraries Enabled 00:02:19.742 ================= 00:02:19.742 00:02:19.742 libs: 00:02:19.742 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:19.742 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:19.742 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:19.742 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:19.742 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:19.742 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:19.742 table, pipeline, graph, node, 00:02:19.742 00:02:19.742 Message: 00:02:19.742 =============== 00:02:19.742 Drivers Enabled 00:02:19.742 =============== 00:02:19.742 00:02:19.742 common: 00:02:19.742 00:02:19.742 bus: 00:02:19.742 pci, vdev, 00:02:19.742 mempool: 00:02:19.742 ring, 00:02:19.742 dma: 00:02:19.742 00:02:19.742 net: 00:02:19.742 i40e, 00:02:19.742 raw: 00:02:19.742 00:02:19.742 crypto: 00:02:19.742 00:02:19.742 compress: 00:02:19.742 00:02:19.742 regex: 00:02:19.742 00:02:19.742 vdpa: 00:02:19.742 00:02:19.742 event: 00:02:19.742 00:02:19.742 baseband: 00:02:19.742 00:02:19.742 gpu: 00:02:19.742 00:02:19.742 00:02:19.742 Message: 00:02:19.742 ================= 00:02:19.742 Content Skipped 00:02:19.742 ================= 00:02:19.742 00:02:19.742 apps: 00:02:19.742 00:02:19.742 libs: 00:02:19.742 kni: explicitly disabled via build config (deprecated lib) 00:02:19.742 flow_classify: explicitly disabled via build config 
(deprecated lib) 00:02:19.742 00:02:19.742 drivers: 00:02:19.742 common/cpt: not in enabled drivers build config 00:02:19.742 common/dpaax: not in enabled drivers build config 00:02:19.742 common/iavf: not in enabled drivers build config 00:02:19.742 common/idpf: not in enabled drivers build config 00:02:19.742 common/mvep: not in enabled drivers build config 00:02:19.742 common/octeontx: not in enabled drivers build config 00:02:19.742 bus/auxiliary: not in enabled drivers build config 00:02:19.742 bus/dpaa: not in enabled drivers build config 00:02:19.742 bus/fslmc: not in enabled drivers build config 00:02:19.742 bus/ifpga: not in enabled drivers build config 00:02:19.742 bus/vmbus: not in enabled drivers build config 00:02:19.742 common/cnxk: not in enabled drivers build config 00:02:19.742 common/mlx5: not in enabled drivers build config 00:02:19.742 common/qat: not in enabled drivers build config 00:02:19.742 common/sfc_efx: not in enabled drivers build config 00:02:19.742 mempool/bucket: not in enabled drivers build config 00:02:19.742 mempool/cnxk: not in enabled drivers build config 00:02:19.742 mempool/dpaa: not in enabled drivers build config 00:02:19.742 mempool/dpaa2: not in enabled drivers build config 00:02:19.742 mempool/octeontx: not in enabled drivers build config 00:02:19.742 mempool/stack: not in enabled drivers build config 00:02:19.742 dma/cnxk: not in enabled drivers build config 00:02:19.742 dma/dpaa: not in enabled drivers build config 00:02:19.742 dma/dpaa2: not in enabled drivers build config 00:02:19.742 dma/hisilicon: not in enabled drivers build config 00:02:19.742 dma/idxd: not in enabled drivers build config 00:02:19.742 dma/ioat: not in enabled drivers build config 00:02:19.742 dma/skeleton: not in enabled drivers build config 00:02:19.742 net/af_packet: not in enabled drivers build config 00:02:19.742 net/af_xdp: not in enabled drivers build config 00:02:19.742 net/ark: not in enabled drivers build config 00:02:19.742 net/atlantic: 
not in enabled drivers build config 00:02:19.742 net/avp: not in enabled drivers build config 00:02:19.742 net/axgbe: not in enabled drivers build config 00:02:19.742 net/bnx2x: not in enabled drivers build config 00:02:19.742 net/bnxt: not in enabled drivers build config 00:02:19.742 net/bonding: not in enabled drivers build config 00:02:19.742 net/cnxk: not in enabled drivers build config 00:02:19.742 net/cxgbe: not in enabled drivers build config 00:02:19.742 net/dpaa: not in enabled drivers build config 00:02:19.742 net/dpaa2: not in enabled drivers build config 00:02:19.742 net/e1000: not in enabled drivers build config 00:02:19.742 net/ena: not in enabled drivers build config 00:02:19.742 net/enetc: not in enabled drivers build config 00:02:19.742 net/enetfec: not in enabled drivers build config 00:02:19.742 net/enic: not in enabled drivers build config 00:02:19.742 net/failsafe: not in enabled drivers build config 00:02:19.742 net/fm10k: not in enabled drivers build config 00:02:19.742 net/gve: not in enabled drivers build config 00:02:19.742 net/hinic: not in enabled drivers build config 00:02:19.742 net/hns3: not in enabled drivers build config 00:02:19.742 net/iavf: not in enabled drivers build config 00:02:19.742 net/ice: not in enabled drivers build config 00:02:19.742 net/idpf: not in enabled drivers build config 00:02:19.742 net/igc: not in enabled drivers build config 00:02:19.742 net/ionic: not in enabled drivers build config 00:02:19.742 net/ipn3ke: not in enabled drivers build config 00:02:19.742 net/ixgbe: not in enabled drivers build config 00:02:19.742 net/kni: not in enabled drivers build config 00:02:19.742 net/liquidio: not in enabled drivers build config 00:02:19.742 net/mana: not in enabled drivers build config 00:02:19.742 net/memif: not in enabled drivers build config 00:02:19.742 net/mlx4: not in enabled drivers build config 00:02:19.742 net/mlx5: not in enabled drivers build config 00:02:19.742 net/mvneta: not in enabled drivers build 
config 00:02:19.742 net/mvpp2: not in enabled drivers build config 00:02:19.742 net/netvsc: not in enabled drivers build config 00:02:19.742 net/nfb: not in enabled drivers build config 00:02:19.742 net/nfp: not in enabled drivers build config 00:02:19.742 net/ngbe: not in enabled drivers build config 00:02:19.742 net/null: not in enabled drivers build config 00:02:19.742 net/octeontx: not in enabled drivers build config 00:02:19.742 net/octeon_ep: not in enabled drivers build config 00:02:19.742 net/pcap: not in enabled drivers build config 00:02:19.742 net/pfe: not in enabled drivers build config 00:02:19.742 net/qede: not in enabled drivers build config 00:02:19.742 net/ring: not in enabled drivers build config 00:02:19.742 net/sfc: not in enabled drivers build config 00:02:19.742 net/softnic: not in enabled drivers build config 00:02:19.742 net/tap: not in enabled drivers build config 00:02:19.742 net/thunderx: not in enabled drivers build config 00:02:19.742 net/txgbe: not in enabled drivers build config 00:02:19.742 net/vdev_netvsc: not in enabled drivers build config 00:02:19.742 net/vhost: not in enabled drivers build config 00:02:19.742 net/virtio: not in enabled drivers build config 00:02:19.742 net/vmxnet3: not in enabled drivers build config 00:02:19.742 raw/cnxk_bphy: not in enabled drivers build config 00:02:19.742 raw/cnxk_gpio: not in enabled drivers build config 00:02:19.742 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:19.742 raw/ifpga: not in enabled drivers build config 00:02:19.742 raw/ntb: not in enabled drivers build config 00:02:19.742 raw/skeleton: not in enabled drivers build config 00:02:19.742 crypto/armv8: not in enabled drivers build config 00:02:19.742 crypto/bcmfs: not in enabled drivers build config 00:02:19.742 crypto/caam_jr: not in enabled drivers build config 00:02:19.742 crypto/ccp: not in enabled drivers build config 00:02:19.742 crypto/cnxk: not in enabled drivers build config 00:02:19.742 crypto/dpaa_sec: not in 
enabled drivers build config 00:02:19.742 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.742 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.742 crypto/mlx5: not in enabled drivers build config 00:02:19.742 crypto/mvsam: not in enabled drivers build config 00:02:19.742 crypto/nitrox: not in enabled drivers build config 00:02:19.742 crypto/null: not in enabled drivers build config 00:02:19.742 crypto/octeontx: not in enabled drivers build config 00:02:19.742 crypto/openssl: not in enabled drivers build config 00:02:19.742 crypto/scheduler: not in enabled drivers build config 00:02:19.742 crypto/uadk: not in enabled drivers build config 00:02:19.742 crypto/virtio: not in enabled drivers build config 00:02:19.742 compress/isal: not in enabled drivers build config 00:02:19.742 compress/mlx5: not in enabled drivers build config 00:02:19.742 compress/octeontx: not in enabled drivers build config 00:02:19.742 compress/zlib: not in enabled drivers build config 00:02:19.742 regex/mlx5: not in enabled drivers build config 00:02:19.742 regex/cn9k: not in enabled drivers build config 00:02:19.742 vdpa/ifc: not in enabled drivers build config 00:02:19.742 vdpa/mlx5: not in enabled drivers build config 00:02:19.742 vdpa/sfc: not in enabled drivers build config 00:02:19.742 event/cnxk: not in enabled drivers build config 00:02:19.742 event/dlb2: not in enabled drivers build config 00:02:19.742 event/dpaa: not in enabled drivers build config 00:02:19.742 event/dpaa2: not in enabled drivers build config 00:02:19.742 event/dsw: not in enabled drivers build config 00:02:19.742 event/opdl: not in enabled drivers build config 00:02:19.742 event/skeleton: not in enabled drivers build config 00:02:19.742 event/sw: not in enabled drivers build config 00:02:19.742 event/octeontx: not in enabled drivers build config 00:02:19.742 baseband/acc: not in enabled drivers build config 00:02:19.742 baseband/fpga_5gnr_fec: not in enabled drivers build config 
00:02:19.742 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:19.742 baseband/la12xx: not in enabled drivers build config 00:02:19.742 baseband/null: not in enabled drivers build config 00:02:19.742 baseband/turbo_sw: not in enabled drivers build config 00:02:19.742 gpu/cuda: not in enabled drivers build config 00:02:19.742 00:02:19.742 00:02:19.742 Build targets in project: 316 00:02:19.742 00:02:19.742 DPDK 22.11.4 00:02:19.742 00:02:19.742 User defined options 00:02:19.742 libdir : lib 00:02:19.742 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:19.742 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:19.742 c_link_args : 00:02:19.742 enable_docs : false 00:02:19.742 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.742 enable_kmods : false 00:02:19.742 machine : native 00:02:19.742 tests : false 00:02:19.742 00:02:19.742 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.742 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
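The WARNING above refers to invoking the configure step as `meson [options] <builddir>` rather than the explicit `meson setup [options] <builddir>`. A hedged sketch of the non-deprecated spelling of the same configure-and-build sequence, with the flags taken from the "User defined options" summary in the log (paths as in this workspace):

```shell
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
meson setup build-tmp --prefix="$PWD/build" --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
ninja -C build-tmp -j48
```

Only the subcommand spelling changes; the options are identical, so the ambiguity warning goes away without affecting the configuration. (`-Dmachine` itself draws a separate deprecation warning above in favor of `cpu_instruction_set`.)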
00:02:19.743 10:56:44 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:02:19.743 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:19.743 [1/745] Generating lib/rte_telemetry_mingw with a custom command
00:02:19.743 [2/745] Generating lib/rte_kvargs_mingw with a custom command
00:02:19.743 [3/745] Generating lib/rte_kvargs_def with a custom command
00:02:19.743 [4/745] Generating lib/rte_telemetry_def with a custom command
00:02:19.743 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:19.743 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:19.743 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:19.743 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:19.743 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:19.743 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:19.743 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:19.743 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:19.743 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:19.743 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:19.743 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:19.743 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:19.743 [17/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:19.743 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:19.743 [19/745] Linking static target lib/librte_kvargs.a
00:02:19.743 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:20.008 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:20.008 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:20.008 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:20.008 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:20.008 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:20.008 [26/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:20.008 [27/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:20.008 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:20.008 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:20.008 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:20.008 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:20.008 [32/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:02:20.008 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:20.008 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:20.008 [35/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:20.008 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:20.008 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:20.008 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:20.008 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:20.008 [40/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:20.008 [41/745] Generating lib/rte_eal_def with a custom command
00:02:20.008 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:20.008 [43/745] Generating lib/rte_eal_mingw with a custom command
00:02:20.008 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:20.008 [45/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:20.008 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:20.008 [47/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:20.008 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:20.008 [49/745] Generating lib/rte_ring_def with a custom command
00:02:20.008 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:20.008 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:20.008 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:02:20.008 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:20.008 [54/745] Generating lib/rte_ring_mingw with a custom command
00:02:20.008 [55/745] Generating lib/rte_rcu_def with a custom command
00:02:20.008 [56/745] Generating lib/rte_rcu_mingw with a custom command
00:02:20.008 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:20.008 [58/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:20.008 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:20.008 [60/745] Generating lib/rte_mempool_def with a custom command
00:02:20.008 [61/745] Generating lib/rte_mempool_mingw with a custom command
00:02:20.008 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:20.008 [63/745] Generating lib/rte_mbuf_def with a custom command
00:02:20.008 [64/745] Generating lib/rte_mbuf_mingw with a custom command
00:02:20.008 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:20.008 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:20.008 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:20.008 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:20.008 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:20.008 [70/745] Generating lib/rte_net_mingw with a custom command
00:02:20.008 [71/745] Generating lib/rte_meter_mingw with a custom command
00:02:20.008 [72/745] Generating lib/rte_meter_def with a custom command
00:02:20.272 [73/745] Generating lib/rte_net_def with a custom command
00:02:20.272 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:20.272 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:20.272 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:20.272 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:20.272 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.272 [79/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:20.272 [80/745] Linking static target lib/librte_ring.a
00:02:20.272 [81/745] Generating lib/rte_ethdev_def with a custom command
00:02:20.272 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:20.272 [83/745] Linking target lib/librte_kvargs.so.23.0
00:02:20.272 [84/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:20.272 [85/745] Generating lib/rte_ethdev_mingw with a custom command
00:02:20.534 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:20.534 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:20.534 [88/745] Linking static target lib/librte_meter.a
00:02:20.534 [89/745] Generating lib/rte_pci_def with a custom command
00:02:20.534 [90/745] Generating lib/rte_pci_mingw with a custom command
00:02:20.534 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:20.534 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:20.534 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:20.534 [94/745] Linking static target lib/librte_pci.a
00:02:20.534 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:20.534 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:20.534 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:20.800 [98/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.800 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:20.800 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:20.800 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:20.800 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:20.800 [103/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.800 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:20.800 [105/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:20.800 [106/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:20.800 [107/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:20.800 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:20.800 [109/745] Linking static target lib/librte_telemetry.a
00:02:20.800 [110/745] Generating lib/rte_cmdline_def with a custom command
00:02:20.800 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:20.800 [112/745] Generating lib/rte_cmdline_mingw with a custom command
00:02:20.800 [113/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.069 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:21.069 [115/745] Generating lib/rte_metrics_def with a custom command
00:02:21.069 [116/745] Generating lib/rte_metrics_mingw with a custom command
00:02:21.069 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:21.069 [118/745] Generating lib/rte_hash_def with a custom command
00:02:21.069 [119/745] Generating lib/rte_hash_mingw with a custom command
00:02:21.069 [120/745] Generating lib/rte_timer_mingw with a custom command
00:02:21.069 [121/745] Generating lib/rte_timer_def with a custom command
00:02:21.069 [122/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:21.069 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:21.333 [124/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:21.333 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:21.333 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:21.333 [127/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:21.333 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:21.333 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:21.333 [130/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:21.333 [131/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:21.333 [132/745] Generating lib/rte_acl_mingw with a custom command
00:02:21.333 [133/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:21.333 [134/745] Generating lib/rte_acl_def with a custom command
00:02:21.333 [135/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:21.333 [136/745] Generating lib/rte_bbdev_def with a custom command
00:02:21.333 [137/745] Generating lib/rte_bbdev_mingw with a custom command
00:02:21.333 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:21.333 [139/745] Generating lib/rte_bitratestats_def with a custom command
00:02:21.333 [140/745] Generating lib/rte_bitratestats_mingw with a custom command
00:02:21.333 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:21.333 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:21.333 [143/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:21.333 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:21.602 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:21.602 [146/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:21.602 [147/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.602 [148/745] Generating lib/rte_bpf_def with a custom command
00:02:21.602 [149/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:21.602 [150/745] Generating lib/rte_bpf_mingw with a custom command
00:02:21.602 [151/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:21.602 [152/745] Linking target lib/librte_telemetry.so.23.0
00:02:21.602 [153/745] Generating lib/rte_cfgfile_mingw with a custom command
00:02:21.602 [154/745] Generating lib/rte_cfgfile_def with a custom command
00:02:21.602 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:21.602 [156/745] Generating lib/rte_compressdev_def with a custom command
00:02:21.602 [157/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:21.602 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:21.602 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:21.602 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:21.868 [161/745] Generating lib/rte_compressdev_mingw with a custom command
00:02:21.868 [162/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:21.868 [163/745] Generating lib/rte_cryptodev_def with a custom command
00:02:21.868 [164/745] Linking static target lib/librte_rcu.a
00:02:21.868 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:21.868 [166/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:21.868 [167/745] Generating lib/rte_cryptodev_mingw with a custom command
00:02:21.868 [168/745] Linking static target lib/librte_cmdline.a
00:02:21.868 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:21.868 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:21.868 [171/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:21.868 [172/745] Linking static target lib/librte_net.a
00:02:21.868 [173/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:21.868 [174/745] Linking static target lib/librte_timer.a
00:02:21.868 [175/745] Generating lib/rte_distributor_mingw with a custom command
00:02:21.868 [176/745] Generating lib/rte_distributor_def with a custom command
00:02:21.868 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:21.868 [178/745] Generating lib/rte_efd_def with a custom command
00:02:21.868 [179/745] Generating lib/rte_efd_mingw with a custom command
00:02:22.140 [180/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:22.140 [181/745] Linking static target lib/librte_cfgfile.a
00:02:22.140 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:22.140 [183/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:22.140 [184/745] Linking static target lib/librte_mempool.a
00:02:22.140 [185/745] Linking static target lib/librte_metrics.a
00:02:22.402 [186/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.402 [187/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.402 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:22.402 [189/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.402 [190/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:22.402 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:22.402 [192/745] Linking static target lib/librte_eal.a
00:02:22.665 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:22.665 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:22.665 [195/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:22.665 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:22.665 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:22.665 [198/745] Linking static target lib/librte_bitratestats.a
00:02:22.665 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:22.665 [200/745] Generating lib/rte_eventdev_def with a custom command
00:02:22.665 [201/745] Generating lib/rte_eventdev_mingw with a custom command
00:02:22.666 [202/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.666 [203/745] Generating lib/rte_gpudev_def with a custom command
00:02:22.666 [204/745] Generating lib/rte_gpudev_mingw with a custom command
00:02:22.666 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:22.666 [206/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.930 [207/745] Generating lib/rte_gro_def with a custom command
00:02:22.930 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:22.930 [209/745] Generating lib/rte_gro_mingw with a custom command
00:02:22.930 [210/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:22.930 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:22.930 [212/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.930 [213/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:22.930 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:22.930 [215/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:23.193 [216/745] Generating lib/rte_gso_def with a custom command
00:02:23.193 [217/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:23.193 [218/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:23.193 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:23.193 [220/745] Generating lib/rte_gso_mingw with a custom command
00:02:23.193 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:23.193 [222/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:23.193 [223/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.193 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:23.193 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:23.193 [226/745] Linking static target lib/librte_bbdev.a
00:02:23.193 [227/745] Generating lib/rte_ip_frag_def with a custom command
00:02:23.193 [228/745] Generating lib/rte_ip_frag_mingw with a custom command
00:02:23.193 [229/745] Generating lib/rte_jobstats_def with a custom command
00:02:23.193 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:23.461 [231/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.461 [232/745] Generating lib/rte_jobstats_mingw with a custom command
00:02:23.461 [233/745] Generating lib/rte_latencystats_def with a custom command
00:02:23.461 [234/745] Generating lib/rte_latencystats_mingw with a custom command
00:02:23.461 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:23.461 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:23.461 [237/745] Linking static target lib/librte_compressdev.a
00:02:23.461 [238/745] Generating lib/rte_lpm_def with a custom command
00:02:23.461 [239/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:23.461 [240/745] Linking static target lib/librte_jobstats.a
00:02:23.461 [241/745] Generating lib/rte_lpm_mingw with a custom command
00:02:23.722 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:23.722 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:23.722 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:23.987 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:23.987 [246/745] Linking static target lib/librte_distributor.a
00:02:23.987 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:23.987 [248/745] Generating lib/rte_member_def with a custom command
00:02:23.987 [249/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:23.987 [250/745] Generating lib/rte_member_mingw with a custom command
00:02:23.987 [251/745] Generating lib/rte_pcapng_def with a custom command
00:02:23.987 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:23.987 [253/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:24.255 [254/745] Linking static target lib/librte_bpf.a
00:02:24.255 [255/745] Generating lib/rte_pcapng_mingw with a custom command
00:02:24.255 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:24.255 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:24.255 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:24.255 [259/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:24.255 [260/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:24.255 [261/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.255 [262/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.255 [263/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:24.255 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:24.255 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:24.523 [266/745] Generating lib/rte_power_mingw with a custom command
00:02:24.523 [267/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:24.523 [268/745] Generating lib/rte_power_def with a custom command
00:02:24.523 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:24.523 [270/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:24.523 [271/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:24.523 [272/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:24.523 [273/745] Linking static target lib/librte_gpudev.a
00:02:24.523 [274/745] Generating lib/rte_rawdev_def with a custom command
00:02:24.523 [275/745] Generating lib/rte_rawdev_mingw with a custom command
00:02:24.523 [276/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:24.523 [277/745] Generating lib/rte_regexdev_mingw with a custom command
00:02:24.523 [278/745] Generating lib/rte_regexdev_def with a custom command
00:02:24.523 [279/745] Linking static target lib/librte_gro.a
00:02:24.523 [280/745] Generating lib/rte_dmadev_def with a custom command
00:02:24.523 [281/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:24.523 [282/745] Generating lib/rte_dmadev_mingw with a custom command
00:02:24.523 [283/745] Generating lib/rte_rib_def with a custom command
00:02:24.523 [284/745] Generating lib/rte_rib_mingw with a custom command
00:02:24.788 [285/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.788 [286/745] Generating lib/rte_reorder_def with a custom command
00:02:24.788 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:24.788 [288/745] Generating lib/rte_reorder_mingw with a custom command
00:02:24.788 [289/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:24.788 [290/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:24.788 [291/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:24.788 [292/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.063 [293/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:25.063 [294/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:25.063 [295/745] Linking static target lib/librte_latencystats.a
00:02:25.063 [296/745] Generating lib/rte_sched_def with a custom command
00:02:25.063 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:25.063 [298/745] Generating lib/rte_sched_mingw with a custom command
00:02:25.063 [299/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.063 [300/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:25.063 [301/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:25.063 [302/745] Generating lib/rte_security_def with a custom command
00:02:25.064 [303/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:25.064 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:25.064 [305/745] Generating lib/rte_security_mingw with a custom command
00:02:25.064 [306/745] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:25.064 [307/745] Generating lib/rte_stack_mingw with a custom command
00:02:25.064 [308/745] Generating lib/rte_stack_def with a custom command
00:02:25.064 [309/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:25.064 [310/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:25.064 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:25.064 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:25.064 [313/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:25.064 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:25.064 [315/745] Linking static target lib/librte_rawdev.a
00:02:25.064 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:25.064 [317/745] Generating lib/rte_vhost_mingw with a custom command
00:02:25.064 [318/745] Generating lib/rte_vhost_def with a custom command
00:02:25.064 [319/745] Linking static target lib/librte_stack.a
00:02:25.329 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:25.329 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:25.329 [322/745] Linking static target lib/librte_dmadev.a
00:02:25.329 [323/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:25.329 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.329 [325/745] Linking static target lib/librte_ip_frag.a
00:02:25.329 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:25.593 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:25.593 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:25.593 [329/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:25.593 [330/745] Generating lib/rte_ipsec_def with a custom command
00:02:25.593 [331/745] Generating lib/rte_ipsec_mingw with a custom command
00:02:25.593 [332/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.593 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:25.857 [334/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:25.857 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.857 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:25.857 [337/745] Generating lib/rte_fib_def with a custom command
00:02:25.857 [338/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.857 [339/745] Generating lib/rte_fib_mingw with a custom command
00:02:25.857 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:25.857 [341/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:25.857 [342/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.857 [343/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:25.857 [344/745] Linking static target lib/librte_efd.a
00:02:25.857 [345/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:25.857 [346/745] Linking static target lib/librte_regexdev.a
00:02:26.120 [347/745] Linking static target lib/librte_gso.a
00:02:26.120 [348/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:26.120 [349/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.383 [350/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:26.383 [351/745] Linking static target lib/librte_pcapng.a
00:02:26.383 [352/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.383 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:26.383 [354/745] Linking static target lib/librte_lpm.a
00:02:26.383 [355/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.383 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:26.383 [357/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:26.383 [358/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:26.648 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:26.648 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:26.648 [361/745] Linking static target lib/librte_reorder.a
00:02:26.648 [362/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:26.648 [363/745] Generating lib/rte_port_def with a custom command
00:02:26.648 [364/745] Generating lib/rte_port_mingw with a custom command
00:02:26.648 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:26.913 [366/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.913 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:26.913 [368/745] Linking static target lib/acl/libavx2_tmp.a
00:02:26.913 [369/745] Generating lib/rte_pdump_def with a custom command
00:02:26.913 [370/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:26.913 [371/745] Generating lib/rte_pdump_mingw with a custom command
00:02:26.913 [372/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:26.913 [373/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:26.913 [374/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:26.913 [375/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:26.913 [376/745] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:26.913 [377/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:26.913 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:26.913 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:26.913 [380/745] Linking static target lib/librte_security.a
00:02:26.913 [381/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.179 [382/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:27.179 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.179 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:27.179 [385/745] Linking static target lib/librte_power.a
00:02:27.179 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:27.179 [387/745] Linking static target lib/librte_hash.a
00:02:27.179 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:27.179 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.443 [390/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:27.443 [391/745] Linking static target lib/acl/libavx512_tmp.a
00:02:27.443 [392/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:27.443 [393/745] Linking static target lib/librte_acl.a
00:02:27.443 [394/745] Linking static target lib/librte_rib.a
00:02:27.443 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:27.443 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:27.715 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:27.715 [398/745] Generating lib/rte_table_def with a custom command
00:02:27.715 [399/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.715 [400/745] Generating lib/rte_table_mingw with a custom command
00:02:27.715 [401/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:27.715 [402/745] Linking static target lib/librte_ethdev.a
00:02:27.979 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.979 [404/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:27.979 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:27.979 [406/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.979 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:28.245 [408/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:28.245 [409/745] Linking static target lib/librte_mbuf.a
00:02:28.245 [410/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:28.245 [411/745] Generating lib/rte_pipeline_def with a custom command
00:02:28.245 [412/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:28.245 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:28.245 [414/745] Generating lib/rte_pipeline_mingw with a custom command
00:02:28.245 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:28.245 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:28.245 [417/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:28.245 [418/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.245 [419/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:28.245 [420/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:28.245 [421/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:28.245 [422/745] Generating lib/rte_graph_def with a custom command
00:02:28.245 [423/745] Generating lib/rte_graph_mingw with a custom command
00:02:28.245 [424/745] Linking static target lib/librte_fib.a
00:02:28.507 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:28.507 [426/745] Linking static target lib/librte_member.a
00:02:28.507 [427/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:28.507 [428/745] Linking static target lib/librte_eventdev.a
00:02:28.507 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:28.507 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:28.507 [431/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:28.507 [432/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:28.771 [433/745] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:28.771 [434/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.771 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:28.771 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:28.771 [437/745] Generating lib/rte_node_def with a custom command
00:02:28.771 [438/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:28.771 [439/745] Generating lib/rte_node_mingw with a custom command
00:02:28.771 [440/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:28.771 [441/745] Linking static target lib/librte_sched.a
00:02:28.771 [442/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.038 [443/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.038 [444/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:29.038 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:29.038 [446/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:29.038 [447/745] Generating drivers/rte_bus_pci_def with a custom command
00:02:29.038 [448/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:29.038 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:29.038 [450/745] Linking static target lib/librte_cryptodev.a
00:02:29.038 [451/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.038 [452/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:29.038 [453/745] Generating drivers/rte_bus_vdev_def with a custom command
00:02:29.038 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:29.038 [455/745] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:29.038 [456/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:29.038 [457/745] Generating drivers/rte_mempool_ring_def with a custom command
00:02:29.300 [458/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:29.300 [459/745] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:29.300 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:29.300 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:29.300 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:29.300 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:29.300 [464/745] Linking static target lib/librte_pdump.a
00:02:29.300 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:29.300 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:29.565 [467/745]
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.565 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:29.565 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:29.565 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.565 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.565 [472/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:29.565 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:29.565 [474/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.565 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:29.565 [476/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.833 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:29.833 [478/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:29.833 [479/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:29.833 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:29.833 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:29.833 [482/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:29.833 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:29.833 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.833 [485/745] Linking static target lib/librte_table.a 00:02:29.833 [486/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:30.095 [487/745] Linking static target lib/librte_ipsec.a 00:02:30.095 [488/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.095 [489/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.095 [490/745] 
Linking static target drivers/librte_bus_vdev.a 00:02:30.359 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:30.359 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.359 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.359 [494/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:30.359 [495/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.359 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:30.624 [497/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.625 [498/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:30.625 [499/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.625 [500/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:30.625 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:30.625 [502/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:30.625 [503/745] Linking static target lib/librte_graph.a 00:02:30.625 [504/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:30.625 [505/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:30.625 [506/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:30.625 [507/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:30.625 [508/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:30.890 [509/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.890 [510/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.890 [511/745] Linking static target drivers/librte_bus_pci.a 00:02:30.890 [512/745] Compiling C 
object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:30.890 [513/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.156 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.156 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:31.423 [516/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:31.423 [517/745] Linking static target lib/librte_port.a 00:02:31.423 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:31.423 [519/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.689 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:31.689 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:31.689 [522/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:31.689 [523/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.955 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:31.955 [525/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:31.955 [526/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:31.955 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:32.220 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.220 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:32.220 [530/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:32.220 [531/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:32.220 [532/745] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.220 [533/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.220 [534/745] Linking static target drivers/librte_mempool_ring.a 00:02:32.220 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:32.220 [536/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.220 [537/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.490 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:32.490 [539/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:32.490 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:32.754 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:32.754 [542/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.021 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:33.021 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:33.021 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:33.284 [546/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:33.284 [547/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:33.284 [548/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:33.284 [549/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:33.284 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:33.284 [551/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:33.549 [552/745] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:33.549 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:33.816 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:33.816 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:33.816 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:34.085 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:34.085 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:34.085 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:34.353 [560/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:34.353 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:34.353 [562/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.353 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:34.617 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:34.617 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:34.617 [566/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:34.617 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:34.617 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:34.617 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:34.881 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:34.881 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:34.881 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:34.881 [573/745] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:34.881 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:35.145 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:35.145 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:35.145 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:35.145 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:35.409 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:35.409 [580/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:35.409 [581/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:35.409 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:35.409 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:35.673 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:35.939 [585/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:35.939 [586/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:36.203 [587/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.203 [588/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:36.203 [589/745] Linking target lib/librte_eal.so.23.0 00:02:36.204 [590/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:36.470 [591/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:36.470 [592/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:36.470 [593/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:36.470 [594/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:36.470 [595/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:36.470 [596/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.470 [597/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:36.734 [598/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:36.734 [599/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:36.734 [600/745] Linking target lib/librte_cfgfile.so.23.0 00:02:36.734 [601/745] Linking target lib/librte_ring.so.23.0 00:02:36.734 [602/745] Linking target lib/librte_jobstats.so.23.0 00:02:36.734 [603/745] Linking target lib/librte_meter.so.23.0 00:02:36.734 [604/745] Linking target lib/librte_pci.so.23.0 00:02:36.734 [605/745] Linking target lib/librte_timer.so.23.0 00:02:36.734 [606/745] Linking target lib/librte_rawdev.so.23.0 00:02:36.734 [607/745] Linking target lib/librte_acl.so.23.0 00:02:36.734 [608/745] Linking target lib/librte_dmadev.so.23.0 00:02:36.734 [609/745] Linking target lib/librte_stack.so.23.0 00:02:36.734 [610/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:36.734 [611/745] Linking target lib/librte_graph.so.23.0 00:02:36.734 [612/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:36.734 [613/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:36.734 [614/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:36.996 [615/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:36.996 [616/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:36.996 [617/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:36.996 [618/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:36.996 [619/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:36.996 [620/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:36.996 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:36.996 [622/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:36.996 [623/745] Linking target lib/librte_rcu.so.23.0 00:02:36.996 [624/745] Linking target lib/librte_mempool.so.23.0 00:02:36.996 [625/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:36.996 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:36.996 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:36.996 [628/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:36.996 [629/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:36.996 [630/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:36.996 [631/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:36.996 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:36.996 [633/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:37.255 [634/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:37.255 [635/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:37.255 [636/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:37.255 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:37.255 [638/745] Linking target lib/librte_rib.so.23.0 00:02:37.255 [639/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:37.255 [640/745] Linking target lib/librte_mbuf.so.23.0 00:02:37.255 [641/745] Compiling C object 
app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:37.255 [642/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:37.255 [643/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:37.255 [644/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:37.255 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:37.255 [646/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:37.514 [647/745] Linking target lib/librte_fib.so.23.0 00:02:37.514 [648/745] Linking target lib/librte_gpudev.so.23.0 00:02:37.514 [649/745] Linking target lib/librte_reorder.so.23.0 00:02:37.514 [650/745] Linking target lib/librte_distributor.so.23.0 00:02:37.514 [651/745] Linking target lib/librte_compressdev.so.23.0 00:02:37.514 [652/745] Linking target lib/librte_bbdev.so.23.0 00:02:37.514 [653/745] Linking target lib/librte_regexdev.so.23.0 00:02:37.514 [654/745] Linking target lib/librte_net.so.23.0 00:02:37.514 [655/745] Linking target lib/librte_sched.so.23.0 00:02:37.514 [656/745] Linking target lib/librte_cryptodev.so.23.0 00:02:37.514 [657/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:37.514 [658/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:37.514 [659/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:37.514 [660/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:37.514 [661/745] Linking target lib/librte_security.so.23.0 00:02:37.514 [662/745] Linking target lib/librte_hash.so.23.0 00:02:37.514 [663/745] Linking target lib/librte_cmdline.so.23.0 00:02:37.514 [664/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:37.774 [665/745] Linking target lib/librte_ethdev.so.23.0 00:02:37.774 [666/745] Generating symbol file 
lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:37.774 [667/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:37.774 [668/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:37.774 [669/745] Linking target lib/librte_efd.so.23.0 00:02:37.774 [670/745] Linking target lib/librte_lpm.so.23.0 00:02:37.774 [671/745] Linking target lib/librte_member.so.23.0 00:02:37.774 [672/745] Linking target lib/librte_ipsec.so.23.0 00:02:37.774 [673/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:37.774 [674/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:37.774 [675/745] Linking target lib/librte_metrics.so.23.0 00:02:37.774 [676/745] Linking target lib/librte_pcapng.so.23.0 00:02:37.774 [677/745] Linking target lib/librte_gso.so.23.0 00:02:37.774 [678/745] Linking target lib/librte_ip_frag.so.23.0 00:02:37.774 [679/745] Linking target lib/librte_bpf.so.23.0 00:02:37.774 [680/745] Linking target lib/librte_gro.so.23.0 00:02:38.032 [681/745] Linking target lib/librte_power.so.23.0 00:02:38.032 [682/745] Linking target lib/librte_eventdev.so.23.0 00:02:38.032 [683/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:38.032 [684/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:38.032 [685/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:38.032 [686/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:38.032 [687/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:38.032 [688/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:38.032 [689/745] Linking target lib/librte_pdump.so.23.0 00:02:38.032 [690/745] Linking target lib/librte_latencystats.so.23.0 00:02:38.032 [691/745] Linking 
target lib/librte_bitratestats.so.23.0 00:02:38.032 [692/745] Linking target lib/librte_port.so.23.0 00:02:38.032 [693/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:38.291 [694/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:38.291 [695/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:38.291 [696/745] Linking target lib/librte_table.so.23.0 00:02:38.291 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:38.291 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:38.549 [699/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:38.549 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:38.807 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:38.807 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:38.807 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:38.807 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:39.373 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:39.373 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.373 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:39.373 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:39.633 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:39.633 [710/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:39.633 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.891 [712/745] Linking target 
drivers/librte_net_i40e.so.23.0 00:02:40.829 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:40.829 [714/745] Linking static target lib/librte_node.a 00:02:41.087 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.087 [716/745] Linking target lib/librte_node.so.23.0 00:02:41.345 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:41.604 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:42.171 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:50.326 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:22.403 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.403 [722/745] Linking static target lib/librte_vhost.a 00:03:22.403 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.403 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:32.386 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:32.386 [726/745] Linking static target lib/librte_pipeline.a 00:03:32.646 [727/745] Linking target app/dpdk-test-fib 00:03:32.647 [728/745] Linking target app/dpdk-test-sad 00:03:32.647 [729/745] Linking target app/dpdk-test-flow-perf 00:03:32.647 [730/745] Linking target app/dpdk-test-cmdline 00:03:32.647 [731/745] Linking target app/dpdk-proc-info 00:03:32.647 [732/745] Linking target app/dpdk-test-eventdev 00:03:32.647 [733/745] Linking target app/dpdk-test-pipeline 00:03:32.647 [734/745] Linking target app/dpdk-test-regex 00:03:32.647 [735/745] Linking target app/dpdk-pdump 00:03:32.647 [736/745] Linking target app/dpdk-test-security-perf 00:03:32.647 [737/745] Linking target app/dpdk-dumpcap 00:03:32.647 [738/745] Linking target app/dpdk-test-acl 00:03:32.647 [739/745] Linking target app/dpdk-test-gpudev 00:03:32.647 [740/745] Linking target 
app/dpdk-test-bbdev 00:03:32.647 [741/745] Linking target app/dpdk-test-crypto-perf 00:03:32.647 [742/745] Linking target app/dpdk-test-compress-perf 00:03:32.647 [743/745] Linking target app/dpdk-testpmd 00:03:34.549 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.549 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:34.549 10:57:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:34.549 10:57:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:34.550 10:57:59 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:34.550 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:34.550 [0/1] Installing files. 00:03:35.122 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.122 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:35.123 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.124 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.125 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.126 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:35.127 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:35.128 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:35.128 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.128 Installing lib/librte_rcu.a to
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_hash.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_distributor.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_lpm.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_sched.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.128 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.129 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:35.703 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:35.703 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:35.703 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:35.703 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:35.703 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-acl to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:35.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:03:35.707 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23
00:03:35.707 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:03:35.707 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23
00:03:35.707 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:03:35.707 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23
00:03:35.707 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so
00:03:35.707 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23
00:03:35.707 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so
00:03:35.707 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23
00:03:35.707 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so
00:03:35.707 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23
00:03:35.707 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so
00:03:35.707 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23
00:03:35.707 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so
00:03:35.707 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23
00:03:35.707 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so
00:03:35.707 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23
00:03:35.707 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so
00:03:35.708 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23
00:03:35.708 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:03:35.708 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23
00:03:35.708 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:03:35.708 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23
00:03:35.708 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:03:35.708 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23
00:03:35.708 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:03:35.708 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23
00:03:35.708 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:03:35.708 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23
00:03:35.708 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:03:35.708 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23
00:03:35.708 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:03:35.708 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23
00:03:35.708 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:03:35.708 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23
00:03:35.708 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:03:35.708 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23
00:03:35.708 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:03:35.708 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23
00:03:35.708 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:03:35.708 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23
00:03:35.708 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:03:35.708 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23
00:03:35.708 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:03:35.708 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23
00:03:35.708 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:03:35.708 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23
00:03:35.708 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:03:35.708 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23
00:03:35.708 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:03:35.708 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23
00:03:35.708 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:03:35.708 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23
00:03:35.708 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:03:35.708 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23
00:03:35.708 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:03:35.708 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23
00:03:35.708 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:03:35.708 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23
00:03:35.708 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:03:35.708 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23
00:03:35.708 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:03:35.708 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23
00:03:35.708 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:03:35.708 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23
00:03:35.708 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:03:35.708 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23
00:03:35.708 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:03:35.708 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23
00:03:35.708 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:03:35.708 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23
00:03:35.708 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:03:35.708 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23
00:03:35.708 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:03:35.708 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23
00:03:35.708 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:03:35.708 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23
00:03:35.708 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so
00:03:35.708 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23
00:03:35.708 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so
00:03:35.708 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23
00:03:35.708 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so
00:03:35.708 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23
00:03:35.708 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so
00:03:35.708 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23
00:03:35.709 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so
00:03:35.709 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23
00:03:35.709 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so
00:03:35.709 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23
00:03:35.709 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:03:35.709 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23
00:03:35.709 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so
00:03:35.709 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23
00:03:35.709 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so
00:03:35.709 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23
00:03:35.709 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so
00:03:35.709 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23
00:03:35.709 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so
00:03:35.709 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23
00:03:35.709 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:03:35.709 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23
00:03:35.709 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so
00:03:35.709 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23
00:03:35.709 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so
00:03:35.709 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:03:35.709 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:03:35.709 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:03:35.709 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:03:35.709 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:03:35.709 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:03:35.709 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:03:35.709 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so'
00:03:35.709 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23'
00:03:35.709 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:03:35.709 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:03:35.709 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:03:35.709 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:03:35.709 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:03:35.709 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:03:35.709 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:03:35.709 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:03:35.709 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:03:35.709 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:03:35.709 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:03:35.709 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:03:35.709 10:58:00 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:35.709 10:58:00 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:35.709
00:03:35.709 real 1m24.103s
00:03:35.709 user 14m26.628s
00:03:35.709 sys 1m53.851s
00:03:35.709 10:58:00 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:35.709 10:58:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:35.709 ************************************
00:03:35.709 END TEST build_native_dpdk
00:03:35.709 ************************************
00:03:35.709 10:58:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:35.709 10:58:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:35.709 10:58:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:35.709 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:35.971 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:35.971 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:35.971 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:36.230 Using 'verbs' RDMA provider
00:03:47.165 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:03:57.167 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:03:57.167 Creating mk/config.mk...done.
00:03:57.167 Creating mk/cc.flags.mk...done.
00:03:57.167 Type 'make' to build.
00:03:57.167 10:58:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:03:57.167 10:58:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:57.167 10:58:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:57.167 10:58:20 -- common/autotest_common.sh@10 -- $ set +x
00:03:57.167 ************************************
00:03:57.167 START TEST make
00:03:57.167 ************************************
00:03:57.167 10:58:20 make -- common/autotest_common.sh@1129 -- $ make -j48
00:03:57.167 make[1]: Nothing to be done for 'all'.
00:03:58.565 The Meson build system
00:03:58.565 Version: 1.5.0
00:03:58.565 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:58.565 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:58.565 Build type: native build
00:03:58.565 Project name: libvfio-user
00:03:58.565 Project version: 0.0.1
00:03:58.565 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:58.565 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:58.565 Host machine cpu family: x86_64
00:03:58.565 Host machine cpu: x86_64
00:03:58.565 Run-time dependency threads found: YES
00:03:58.565 Library dl found: YES
00:03:58.565 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:58.565 Run-time dependency json-c found: YES 0.17
00:03:58.565 Run-time dependency cmocka found: YES 1.1.7
00:03:58.565 Program pytest-3 found: NO
00:03:58.565 Program flake8 found: NO
00:03:58.565 Program misspell-fixer found: NO
00:03:58.565 Program restructuredtext-lint found: NO
00:03:58.565 Program valgrind found: YES (/usr/bin/valgrind)
00:03:58.565 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:58.565 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:58.565 Compiler for C supports arguments -Wwrite-strings: YES
00:03:58.565 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:58.565 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:58.565 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:58.565 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:58.565 Build targets in project: 8
00:03:58.566 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:58.566 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:58.566
00:03:58.566 libvfio-user 0.0.1
00:03:58.566
00:03:58.566 User defined options
00:03:58.566 buildtype : debug
00:03:58.566 default_library: shared
00:03:58.566 libdir : /usr/local/lib
00:03:58.566
00:03:58.566 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:59.525 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:59.525 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:59.525 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:59.525 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:59.526 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:59.526 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:59.526 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:59.526 [7/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:59.526 [8/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:59.526 [9/37] Compiling C object samples/null.p/null.c.o
00:03:59.526 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:59.526 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:59.526 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:59.526 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:59.526 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:59.526 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:59.526 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:59.526 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:59.526 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:59.526 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:59.526 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:59.526 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:59.526 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:59.790 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:59.790 [24/37] Compiling C object samples/server.p/server.c.o
00:03:59.790 [25/37] Compiling C object samples/client.p/client.c.o
00:03:59.790 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:59.790 [27/37] Linking target samples/client
00:03:59.790 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:59.790 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:03:59.790 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:00.057 [31/37] Linking target test/unit_tests
00:04:00.057 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:00.057 [33/37] Linking target samples/server
00:04:00.057 [34/37] Linking target samples/null
00:04:00.057 [35/37] Linking target samples/lspci
00:04:00.057 [36/37] Linking target samples/gpio-pci-idio-16
00:04:00.057 [37/37] Linking target samples/shadow_ioeventfd_server
00:04:00.057 INFO: autodetecting backend as ninja
00:04:00.057 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:00.322 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:01.273 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:01.273 ninja: no work to do.
00:04:39.993 CC lib/log/log.o
00:04:39.993 CC lib/ut_mock/mock.o
00:04:39.993 CC lib/log/log_flags.o
00:04:39.993 CC lib/ut/ut.o
00:04:39.993 CC lib/log/log_deprecated.o
00:04:39.993 LIB libspdk_ut_mock.a
00:04:39.993 LIB libspdk_ut.a
00:04:39.993 LIB libspdk_log.a
00:04:39.993 SO libspdk_ut_mock.so.6.0
00:04:39.993 SO libspdk_ut.so.2.0
00:04:39.993 SO libspdk_log.so.7.1
00:04:39.993 SYMLINK libspdk_ut_mock.so
00:04:39.993 SYMLINK libspdk_ut.so
00:04:39.993 SYMLINK libspdk_log.so
00:04:39.993 CC lib/dma/dma.o
00:04:39.993 CXX lib/trace_parser/trace.o
00:04:39.993 CC lib/ioat/ioat.o
00:04:39.993 CC lib/util/base64.o
00:04:39.993 CC lib/util/bit_array.o
00:04:39.993 CC lib/util/cpuset.o
00:04:39.993 CC lib/util/crc16.o
00:04:39.993 CC lib/util/crc32.o
00:04:39.993 CC lib/util/crc32c.o
00:04:39.993 CC lib/util/crc32_ieee.o
00:04:39.993 CC lib/util/crc64.o
00:04:39.993 CC lib/util/dif.o
00:04:39.993 CC lib/util/fd.o
00:04:39.993 CC lib/util/fd_group.o
00:04:39.993 CC lib/util/file.o
00:04:39.993 CC lib/util/hexlify.o
00:04:39.993 CC lib/util/iov.o
00:04:39.993 CC lib/util/math.o
00:04:39.993 CC lib/util/net.o
00:04:39.993 CC lib/util/pipe.o
00:04:39.993 CC lib/util/strerror_tls.o
00:04:39.993 CC lib/util/uuid.o
00:04:39.993 CC lib/util/string.o
00:04:39.993 CC lib/util/xor.o
00:04:39.993 CC lib/util/zipf.o
00:04:39.993 CC lib/util/md5.o
00:04:39.993 CC lib/vfio_user/host/vfio_user_pci.o
00:04:39.993 CC lib/vfio_user/host/vfio_user.o
00:04:39.993 LIB libspdk_dma.a
00:04:39.993 SO libspdk_dma.so.5.0
00:04:39.993 SYMLINK libspdk_dma.so
00:04:39.993 LIB libspdk_ioat.a
00:04:39.993 SO libspdk_ioat.so.7.0
00:04:39.993 LIB libspdk_vfio_user.a
00:04:39.993 SYMLINK libspdk_ioat.so
00:04:39.993 SO libspdk_vfio_user.so.5.0
00:04:39.993 SYMLINK libspdk_vfio_user.so
00:04:39.993 LIB libspdk_util.a
00:04:39.993 SO libspdk_util.so.10.1
00:04:39.993 SYMLINK libspdk_util.so
00:04:39.993 CC lib/conf/conf.o
00:04:39.993 CC lib/json/json_parse.o
00:04:39.993 CC lib/idxd/idxd.o
00:04:39.993 CC lib/json/json_util.o
00:04:39.993 CC lib/idxd/idxd_user.o
00:04:39.993 CC lib/json/json_write.o
00:04:39.993 CC lib/vmd/vmd.o
00:04:39.993 CC lib/idxd/idxd_kernel.o
00:04:39.993 CC lib/rdma_utils/rdma_utils.o
00:04:39.993 CC lib/vmd/led.o
00:04:39.993 CC lib/env_dpdk/env.o
00:04:39.993 CC lib/env_dpdk/memory.o
00:04:39.993 CC lib/env_dpdk/pci.o
00:04:39.993 CC lib/env_dpdk/init.o
00:04:39.993 CC lib/env_dpdk/threads.o
00:04:39.993 CC lib/env_dpdk/pci_ioat.o
00:04:39.993 CC lib/env_dpdk/pci_virtio.o
00:04:39.993 CC lib/env_dpdk/pci_vmd.o
00:04:39.993 CC lib/env_dpdk/pci_idxd.o
00:04:39.993 CC lib/env_dpdk/pci_event.o
00:04:39.993 CC lib/env_dpdk/sigbus_handler.o
00:04:39.993 CC lib/env_dpdk/pci_dpdk.o
00:04:39.993 CC lib/env_dpdk/pci_dpdk_2207.o
00:04:39.993 CC lib/env_dpdk/pci_dpdk_2211.o
00:04:39.993 LIB libspdk_rdma_utils.a
00:04:39.993 LIB libspdk_json.a
00:04:39.993 SO libspdk_rdma_utils.so.1.0
00:04:39.993 LIB libspdk_conf.a
00:04:39.993 SO libspdk_json.so.6.0
00:04:39.993 SO libspdk_conf.so.6.0
00:04:39.993 SYMLINK libspdk_rdma_utils.so
00:04:39.993 SYMLINK libspdk_conf.so
00:04:39.993 SYMLINK libspdk_json.so
00:04:39.993 CC lib/rdma_provider/common.o
00:04:39.993 CC lib/rdma_provider/rdma_provider_verbs.o
00:04:39.993 CC lib/jsonrpc/jsonrpc_server.o
00:04:39.993 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:39.993 CC lib/jsonrpc/jsonrpc_client.o
00:04:39.993 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:39.993 LIB libspdk_idxd.a
00:04:39.993 SO libspdk_idxd.so.12.1
00:04:39.993 LIB libspdk_vmd.a
00:04:39.993 SO libspdk_vmd.so.6.0
00:04:39.993 SYMLINK libspdk_idxd.so
00:04:39.993 SYMLINK libspdk_vmd.so
00:04:39.993 LIB libspdk_rdma_provider.a
00:04:39.993 SO libspdk_rdma_provider.so.7.0
00:04:39.993 LIB libspdk_jsonrpc.a
00:04:39.993 SYMLINK libspdk_rdma_provider.so
00:04:39.993 SO libspdk_jsonrpc.so.6.0
00:04:39.993 SYMLINK libspdk_jsonrpc.so
00:04:39.993 LIB libspdk_trace_parser.a
00:04:39.993 SO libspdk_trace_parser.so.6.0
00:04:39.993 SYMLINK libspdk_trace_parser.so
00:04:39.994 CC lib/rpc/rpc.o
00:04:39.994 LIB libspdk_rpc.a
00:04:39.994 SO libspdk_rpc.so.6.0
00:04:39.994 SYMLINK libspdk_rpc.so
00:04:39.994 CC lib/notify/notify.o
00:04:39.994 CC lib/trace/trace.o
00:04:39.994 CC lib/trace/trace_flags.o
00:04:39.994 CC lib/notify/notify_rpc.o
00:04:39.994 CC lib/trace/trace_rpc.o
00:04:39.994 CC lib/keyring/keyring.o
00:04:39.994 CC lib/keyring/keyring_rpc.o
00:04:40.253 LIB libspdk_notify.a
00:04:40.253 SO libspdk_notify.so.6.0
00:04:40.253 SYMLINK libspdk_notify.so
00:04:40.253 LIB libspdk_keyring.a
00:04:40.253 LIB libspdk_trace.a
00:04:40.253 SO libspdk_keyring.so.2.0
00:04:40.253 SO libspdk_trace.so.11.0
00:04:40.253 SYMLINK libspdk_keyring.so
00:04:40.512 SYMLINK libspdk_trace.so
00:04:40.512 CC lib/thread/thread.o
00:04:40.512 CC lib/thread/iobuf.o
00:04:40.512 CC lib/sock/sock.o
00:04:40.512 CC lib/sock/sock_rpc.o
00:04:40.512 LIB libspdk_env_dpdk.a
00:04:40.771 SO libspdk_env_dpdk.so.15.1
00:04:40.771 SYMLINK libspdk_env_dpdk.so
00:04:41.030 LIB libspdk_sock.a
00:04:41.030 SO libspdk_sock.so.10.0
00:04:41.030 SYMLINK libspdk_sock.so
00:04:41.290 CC lib/nvme/nvme_ctrlr_cmd.o
00:04:41.290 CC lib/nvme/nvme_ctrlr.o
00:04:41.290 CC lib/nvme/nvme_fabric.o
00:04:41.290 CC lib/nvme/nvme_ns_cmd.o
00:04:41.290 CC lib/nvme/nvme_ns.o
00:04:41.290 CC lib/nvme/nvme_pcie_common.o
00:04:41.290 CC lib/nvme/nvme_pcie.o
00:04:41.290 CC lib/nvme/nvme_qpair.o
00:04:41.290 CC lib/nvme/nvme.o
00:04:41.290 CC lib/nvme/nvme_quirks.o
00:04:41.290 CC lib/nvme/nvme_transport.o
00:04:41.290 CC lib/nvme/nvme_discovery.o
00:04:41.290 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:41.290 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:41.290 CC lib/nvme/nvme_tcp.o
00:04:41.290 CC lib/nvme/nvme_opal.o
00:04:41.290 CC lib/nvme/nvme_io_msg.o
00:04:41.290 CC lib/nvme/nvme_poll_group.o
00:04:41.290 CC lib/nvme/nvme_zns.o
00:04:41.290 CC lib/nvme/nvme_stubs.o
00:04:41.290 CC lib/nvme/nvme_auth.o
00:04:41.290 CC lib/nvme/nvme_cuse.o
00:04:41.290 CC lib/nvme/nvme_vfio_user.o
00:04:41.290 CC lib/nvme/nvme_rdma.o
00:04:42.228 LIB libspdk_thread.a
00:04:42.228 SO libspdk_thread.so.11.0
00:04:42.228 SYMLINK libspdk_thread.so
00:04:42.487 CC lib/accel/accel.o
00:04:42.487 CC lib/accel/accel_rpc.o
00:04:42.487 CC lib/accel/accel_sw.o
00:04:42.487 CC lib/fsdev/fsdev.o
00:04:42.487 CC lib/fsdev/fsdev_io.o
00:04:42.487 CC lib/fsdev/fsdev_rpc.o
00:04:42.487 CC lib/vfu_tgt/tgt_endpoint.o
00:04:42.487 CC lib/blob/blobstore.o
00:04:42.487 CC lib/vfu_tgt/tgt_rpc.o
00:04:42.487 CC lib/virtio/virtio.o
00:04:42.487 CC lib/init/json_config.o
00:04:42.487 CC lib/blob/request.o
00:04:42.487 CC lib/init/subsystem.o
00:04:42.487 CC lib/virtio/virtio_vhost_user.o
00:04:42.487 CC lib/blob/zeroes.o
00:04:42.487 CC lib/init/subsystem_rpc.o
00:04:42.487 CC lib/virtio/virtio_vfio_user.o
00:04:42.487 CC lib/blob/blob_bs_dev.o
00:04:42.487 CC lib/virtio/virtio_pci.o
00:04:42.487 CC lib/init/rpc.o
00:04:42.746 LIB libspdk_init.a
00:04:42.746 SO libspdk_init.so.6.0
00:04:42.746 LIB libspdk_virtio.a
00:04:42.746 LIB libspdk_vfu_tgt.a
00:04:42.746 SYMLINK libspdk_init.so
00:04:42.746 SO libspdk_virtio.so.7.0
00:04:42.746 SO libspdk_vfu_tgt.so.3.0
00:04:43.003 SYMLINK libspdk_vfu_tgt.so
00:04:43.003 SYMLINK libspdk_virtio.so
00:04:43.003 CC lib/event/app.o
00:04:43.003 CC lib/event/reactor.o
00:04:43.003 CC lib/event/log_rpc.o
00:04:43.003 CC lib/event/app_rpc.o
00:04:43.003 CC lib/event/scheduler_static.o
00:04:43.003 LIB libspdk_fsdev.a
00:04:43.262 SO libspdk_fsdev.so.2.0
00:04:43.262 SYMLINK libspdk_fsdev.so
00:04:43.262 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:43.521 LIB libspdk_event.a
00:04:43.521 SO libspdk_event.so.14.0
00:04:43.521 SYMLINK libspdk_event.so
00:04:43.521 LIB libspdk_accel.a
00:04:43.521 SO libspdk_accel.so.16.0
00:04:43.780 SYMLINK libspdk_accel.so
00:04:43.780 LIB libspdk_nvme.a
00:04:43.780 CC lib/bdev/bdev.o
00:04:43.780 CC lib/bdev/bdev_rpc.o
00:04:43.780 CC lib/bdev/bdev_zone.o
00:04:43.780 CC lib/bdev/part.
00:04:43.780 CC lib/bdev/scsi_nvme.o 00:04:43.780 SO libspdk_nvme.so.15.0 00:04:44.040 LIB libspdk_fuse_dispatcher.a 00:04:44.040 SO libspdk_fuse_dispatcher.so.1.0 00:04:44.040 SYMLINK libspdk_fuse_dispatcher.so 00:04:44.040 SYMLINK libspdk_nvme.so 00:04:45.416 LIB libspdk_blob.a 00:04:45.676 SO libspdk_blob.so.11.0 00:04:45.676 SYMLINK libspdk_blob.so 00:04:45.935 CC lib/lvol/lvol.o 00:04:45.935 CC lib/blobfs/blobfs.o 00:04:45.935 CC lib/blobfs/tree.o 00:04:46.502 LIB libspdk_bdev.a 00:04:46.502 SO libspdk_bdev.so.17.0 00:04:46.502 LIB libspdk_blobfs.a 00:04:46.766 SYMLINK libspdk_bdev.so 00:04:46.766 SO libspdk_blobfs.so.10.0 00:04:46.766 SYMLINK libspdk_blobfs.so 00:04:46.766 LIB libspdk_lvol.a 00:04:46.766 CC lib/nbd/nbd.o 00:04:46.766 CC lib/nbd/nbd_rpc.o 00:04:46.766 CC lib/ublk/ublk.o 00:04:46.766 CC lib/ublk/ublk_rpc.o 00:04:46.766 CC lib/scsi/dev.o 00:04:46.766 CC lib/nvmf/ctrlr.o 00:04:46.766 CC lib/scsi/lun.o 00:04:46.766 CC lib/nvmf/ctrlr_discovery.o 00:04:46.766 CC lib/ftl/ftl_core.o 00:04:46.766 CC lib/scsi/port.o 00:04:46.766 CC lib/nvmf/ctrlr_bdev.o 00:04:46.766 CC lib/scsi/scsi.o 00:04:46.766 CC lib/nvmf/subsystem.o 00:04:46.766 CC lib/ftl/ftl_init.o 00:04:46.766 CC lib/nvmf/nvmf.o 00:04:46.766 CC lib/scsi/scsi_bdev.o 00:04:46.766 CC lib/ftl/ftl_layout.o 00:04:46.766 CC lib/ftl/ftl_debug.o 00:04:46.766 CC lib/nvmf/nvmf_rpc.o 00:04:46.766 CC lib/scsi/scsi_pr.o 00:04:46.766 CC lib/scsi/scsi_rpc.o 00:04:46.767 CC lib/nvmf/transport.o 00:04:46.767 SO libspdk_lvol.so.10.0 00:04:46.767 CC lib/ftl/ftl_sb.o 00:04:46.767 CC lib/ftl/ftl_io.o 00:04:46.767 CC lib/nvmf/tcp.o 00:04:46.767 CC lib/scsi/task.o 00:04:46.767 CC lib/ftl/ftl_l2p.o 00:04:46.767 CC lib/nvmf/mdns_server.o 00:04:46.767 CC lib/ftl/ftl_l2p_flat.o 00:04:46.767 CC lib/nvmf/stubs.o 00:04:46.767 CC lib/ftl/ftl_nv_cache.o 00:04:46.767 CC lib/nvmf/vfio_user.o 00:04:46.767 CC lib/ftl/ftl_band.o 00:04:46.767 CC lib/nvmf/rdma.o 00:04:46.767 CC lib/ftl/ftl_band_ops.o 00:04:46.767 CC lib/nvmf/auth.o 
00:04:46.767 CC lib/ftl/ftl_writer.o 00:04:46.767 CC lib/ftl/ftl_rq.o 00:04:46.767 CC lib/ftl/ftl_reloc.o 00:04:46.767 CC lib/ftl/ftl_l2p_cache.o 00:04:46.767 CC lib/ftl/ftl_p2l.o 00:04:46.767 CC lib/ftl/ftl_p2l_log.o 00:04:46.767 CC lib/ftl/mngt/ftl_mngt.o 00:04:46.767 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:46.767 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:46.767 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:46.767 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:47.029 SYMLINK libspdk_lvol.so 00:04:47.029 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:47.292 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:47.292 CC lib/ftl/utils/ftl_conf.o 00:04:47.292 CC lib/ftl/utils/ftl_md.o 00:04:47.292 CC lib/ftl/utils/ftl_mempool.o 00:04:47.292 CC lib/ftl/utils/ftl_bitmap.o 00:04:47.292 CC lib/ftl/utils/ftl_property.o 00:04:47.292 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:47.292 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:47.292 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:47.292 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:47.292 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:47.292 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:47.555 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:47.555 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:47.555 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:47.555 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:47.555 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:47.555 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:47.555 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:47.555 CC lib/ftl/base/ftl_base_dev.o 00:04:47.555 CC lib/ftl/base/ftl_base_bdev.o 00:04:47.555 CC lib/ftl/ftl_trace.o 00:04:47.555 LIB libspdk_nbd.a 00:04:47.555 SO libspdk_nbd.so.7.0 00:04:47.815 LIB libspdk_scsi.a 00:04:47.815 SYMLINK libspdk_nbd.so 00:04:47.815 SO libspdk_scsi.so.9.0 
00:04:47.815 SYMLINK libspdk_scsi.so 00:04:48.074 LIB libspdk_ublk.a 00:04:48.074 SO libspdk_ublk.so.3.0 00:04:48.074 CC lib/iscsi/conn.o 00:04:48.074 CC lib/vhost/vhost.o 00:04:48.074 CC lib/iscsi/init_grp.o 00:04:48.074 SYMLINK libspdk_ublk.so 00:04:48.075 CC lib/iscsi/iscsi.o 00:04:48.075 CC lib/vhost/vhost_rpc.o 00:04:48.075 CC lib/vhost/vhost_scsi.o 00:04:48.075 CC lib/iscsi/param.o 00:04:48.075 CC lib/vhost/vhost_blk.o 00:04:48.075 CC lib/iscsi/portal_grp.o 00:04:48.075 CC lib/vhost/rte_vhost_user.o 00:04:48.075 CC lib/iscsi/tgt_node.o 00:04:48.075 CC lib/iscsi/iscsi_subsystem.o 00:04:48.075 CC lib/iscsi/iscsi_rpc.o 00:04:48.075 CC lib/iscsi/task.o 00:04:48.333 LIB libspdk_ftl.a 00:04:48.590 SO libspdk_ftl.so.9.0 00:04:48.848 SYMLINK libspdk_ftl.so 00:04:49.417 LIB libspdk_vhost.a 00:04:49.417 SO libspdk_vhost.so.8.0 00:04:49.417 SYMLINK libspdk_vhost.so 00:04:49.417 LIB libspdk_iscsi.a 00:04:49.681 LIB libspdk_nvmf.a 00:04:49.681 SO libspdk_iscsi.so.8.0 00:04:49.681 SO libspdk_nvmf.so.20.0 00:04:49.681 SYMLINK libspdk_iscsi.so 00:04:49.681 SYMLINK libspdk_nvmf.so 00:04:49.940 CC module/vfu_device/vfu_virtio.o 00:04:49.940 CC module/vfu_device/vfu_virtio_blk.o 00:04:49.940 CC module/vfu_device/vfu_virtio_scsi.o 00:04:49.940 CC module/vfu_device/vfu_virtio_rpc.o 00:04:49.940 CC module/vfu_device/vfu_virtio_fs.o 00:04:49.940 CC module/env_dpdk/env_dpdk_rpc.o 00:04:50.199 CC module/sock/posix/posix.o 00:04:50.199 CC module/keyring/file/keyring_rpc.o 00:04:50.199 CC module/keyring/file/keyring.o 00:04:50.199 CC module/keyring/linux/keyring.o 00:04:50.199 CC module/keyring/linux/keyring_rpc.o 00:04:50.199 CC module/accel/ioat/accel_ioat.o 00:04:50.199 CC module/accel/dsa/accel_dsa.o 00:04:50.199 CC module/accel/ioat/accel_ioat_rpc.o 00:04:50.199 CC module/fsdev/aio/fsdev_aio.o 00:04:50.199 CC module/accel/dsa/accel_dsa_rpc.o 00:04:50.199 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:50.199 CC module/fsdev/aio/linux_aio_mgr.o 00:04:50.199 CC 
module/scheduler/gscheduler/gscheduler.o 00:04:50.199 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:50.199 CC module/blob/bdev/blob_bdev.o 00:04:50.199 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:50.199 CC module/accel/iaa/accel_iaa.o 00:04:50.199 CC module/accel/error/accel_error.o 00:04:50.199 CC module/accel/error/accel_error_rpc.o 00:04:50.199 CC module/accel/iaa/accel_iaa_rpc.o 00:04:50.199 LIB libspdk_env_dpdk_rpc.a 00:04:50.199 SO libspdk_env_dpdk_rpc.so.6.0 00:04:50.199 SYMLINK libspdk_env_dpdk_rpc.so 00:04:50.458 LIB libspdk_keyring_file.a 00:04:50.458 LIB libspdk_keyring_linux.a 00:04:50.458 LIB libspdk_scheduler_dpdk_governor.a 00:04:50.458 SO libspdk_keyring_linux.so.1.0 00:04:50.458 SO libspdk_keyring_file.so.2.0 00:04:50.458 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:50.458 LIB libspdk_accel_error.a 00:04:50.458 LIB libspdk_accel_ioat.a 00:04:50.458 SYMLINK libspdk_keyring_file.so 00:04:50.458 SYMLINK libspdk_keyring_linux.so 00:04:50.458 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:50.458 SO libspdk_accel_error.so.2.0 00:04:50.458 SO libspdk_accel_ioat.so.6.0 00:04:50.458 LIB libspdk_scheduler_gscheduler.a 00:04:50.458 SO libspdk_scheduler_gscheduler.so.4.0 00:04:50.458 SYMLINK libspdk_accel_error.so 00:04:50.458 LIB libspdk_scheduler_dynamic.a 00:04:50.458 SYMLINK libspdk_accel_ioat.so 00:04:50.458 LIB libspdk_blob_bdev.a 00:04:50.458 LIB libspdk_accel_iaa.a 00:04:50.458 LIB libspdk_accel_dsa.a 00:04:50.458 SO libspdk_scheduler_dynamic.so.4.0 00:04:50.458 SYMLINK libspdk_scheduler_gscheduler.so 00:04:50.458 SO libspdk_blob_bdev.so.11.0 00:04:50.458 SO libspdk_accel_iaa.so.3.0 00:04:50.458 SO libspdk_accel_dsa.so.5.0 00:04:50.458 SYMLINK libspdk_scheduler_dynamic.so 00:04:50.458 SYMLINK libspdk_blob_bdev.so 00:04:50.458 SYMLINK libspdk_accel_iaa.so 00:04:50.458 SYMLINK libspdk_accel_dsa.so 00:04:50.717 LIB libspdk_vfu_device.a 00:04:50.717 SO libspdk_vfu_device.so.3.0 00:04:50.717 CC module/bdev/null/bdev_null.o 
00:04:50.717 CC module/bdev/null/bdev_null_rpc.o 00:04:50.717 CC module/bdev/gpt/gpt.o 00:04:50.717 CC module/bdev/delay/vbdev_delay.o 00:04:50.717 CC module/bdev/gpt/vbdev_gpt.o 00:04:50.717 CC module/bdev/lvol/vbdev_lvol.o 00:04:50.717 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:50.717 CC module/blobfs/bdev/blobfs_bdev.o 00:04:50.717 CC module/bdev/error/vbdev_error.o 00:04:50.717 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:50.717 CC module/bdev/error/vbdev_error_rpc.o 00:04:50.717 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:50.717 CC module/bdev/malloc/bdev_malloc.o 00:04:50.717 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:50.717 CC module/bdev/split/vbdev_split.o 00:04:50.717 CC module/bdev/split/vbdev_split_rpc.o 00:04:50.717 CC module/bdev/passthru/vbdev_passthru.o 00:04:50.717 CC module/bdev/raid/bdev_raid.o 00:04:50.717 CC module/bdev/aio/bdev_aio.o 00:04:50.717 CC module/bdev/ftl/bdev_ftl.o 00:04:50.717 CC module/bdev/nvme/bdev_nvme.o 00:04:50.717 CC module/bdev/aio/bdev_aio_rpc.o 00:04:50.717 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:50.717 CC module/bdev/raid/bdev_raid_rpc.o 00:04:50.717 CC module/bdev/raid/bdev_raid_sb.o 00:04:50.717 CC module/bdev/iscsi/bdev_iscsi.o 00:04:50.717 CC module/bdev/raid/raid0.o 00:04:50.717 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.717 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:50.717 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:50.717 CC module/bdev/raid/raid1.o 00:04:50.717 CC module/bdev/nvme/nvme_rpc.o 00:04:50.717 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:50.717 CC module/bdev/raid/concat.o 00:04:50.717 CC module/bdev/nvme/bdev_mdns_client.o 00:04:50.717 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:50.717 CC module/bdev/nvme/vbdev_opal.o 00:04:50.717 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:50.717 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:50.717 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:50.717 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:50.717 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.976 SYMLINK libspdk_vfu_device.so 00:04:50.976 LIB libspdk_fsdev_aio.a 00:04:50.976 SO libspdk_fsdev_aio.so.1.0 00:04:50.976 LIB libspdk_sock_posix.a 00:04:50.976 SO libspdk_sock_posix.so.6.0 00:04:51.236 SYMLINK libspdk_fsdev_aio.so 00:04:51.236 SYMLINK libspdk_sock_posix.so 00:04:51.236 LIB libspdk_blobfs_bdev.a 00:04:51.236 SO libspdk_blobfs_bdev.so.6.0 00:04:51.236 LIB libspdk_bdev_split.a 00:04:51.236 LIB libspdk_bdev_ftl.a 00:04:51.236 LIB libspdk_bdev_gpt.a 00:04:51.236 SO libspdk_bdev_split.so.6.0 00:04:51.236 SYMLINK libspdk_blobfs_bdev.so 00:04:51.236 LIB libspdk_bdev_error.a 00:04:51.236 SO libspdk_bdev_gpt.so.6.0 00:04:51.236 SO libspdk_bdev_ftl.so.6.0 00:04:51.236 SO libspdk_bdev_error.so.6.0 00:04:51.236 LIB libspdk_bdev_null.a 00:04:51.495 SYMLINK libspdk_bdev_split.so 00:04:51.495 SYMLINK libspdk_bdev_gpt.so 00:04:51.495 SYMLINK libspdk_bdev_ftl.so 00:04:51.495 SO libspdk_bdev_null.so.6.0 00:04:51.495 LIB libspdk_bdev_aio.a 00:04:51.495 SYMLINK libspdk_bdev_error.so 00:04:51.495 LIB libspdk_bdev_passthru.a 00:04:51.495 SO libspdk_bdev_aio.so.6.0 00:04:51.495 LIB libspdk_bdev_iscsi.a 00:04:51.495 SYMLINK libspdk_bdev_null.so 00:04:51.495 SO libspdk_bdev_passthru.so.6.0 00:04:51.495 LIB libspdk_bdev_zone_block.a 00:04:51.495 SO libspdk_bdev_iscsi.so.6.0 00:04:51.495 LIB libspdk_bdev_delay.a 00:04:51.495 SO libspdk_bdev_zone_block.so.6.0 00:04:51.495 SYMLINK libspdk_bdev_aio.so 00:04:51.495 LIB libspdk_bdev_malloc.a 00:04:51.495 SO libspdk_bdev_delay.so.6.0 00:04:51.495 SYMLINK libspdk_bdev_passthru.so 00:04:51.495 SO libspdk_bdev_malloc.so.6.0 00:04:51.495 SYMLINK libspdk_bdev_iscsi.so 00:04:51.495 SYMLINK libspdk_bdev_zone_block.so 00:04:51.495 SYMLINK libspdk_bdev_delay.so 00:04:51.495 SYMLINK libspdk_bdev_malloc.so 00:04:51.495 LIB libspdk_bdev_lvol.a 00:04:51.754 LIB libspdk_bdev_virtio.a 00:04:51.754 SO libspdk_bdev_lvol.so.6.0 00:04:51.754 SO libspdk_bdev_virtio.so.6.0 00:04:51.754 SYMLINK 
libspdk_bdev_lvol.so 00:04:51.754 SYMLINK libspdk_bdev_virtio.so 00:04:52.013 LIB libspdk_bdev_raid.a 00:04:52.272 SO libspdk_bdev_raid.so.6.0 00:04:52.272 SYMLINK libspdk_bdev_raid.so 00:04:53.652 LIB libspdk_bdev_nvme.a 00:04:53.652 SO libspdk_bdev_nvme.so.7.1 00:04:53.652 SYMLINK libspdk_bdev_nvme.so 00:04:53.912 CC module/event/subsystems/sock/sock.o 00:04:53.912 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:53.912 CC module/event/subsystems/keyring/keyring.o 00:04:53.912 CC module/event/subsystems/iobuf/iobuf.o 00:04:53.912 CC module/event/subsystems/scheduler/scheduler.o 00:04:53.912 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:53.912 CC module/event/subsystems/fsdev/fsdev.o 00:04:53.912 CC module/event/subsystems/vmd/vmd.o 00:04:53.912 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:53.912 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:54.171 LIB libspdk_event_keyring.a 00:04:54.171 LIB libspdk_event_vhost_blk.a 00:04:54.171 LIB libspdk_event_fsdev.a 00:04:54.171 LIB libspdk_event_scheduler.a 00:04:54.171 LIB libspdk_event_vfu_tgt.a 00:04:54.171 LIB libspdk_event_vmd.a 00:04:54.171 LIB libspdk_event_sock.a 00:04:54.171 SO libspdk_event_keyring.so.1.0 00:04:54.171 LIB libspdk_event_iobuf.a 00:04:54.171 SO libspdk_event_fsdev.so.1.0 00:04:54.171 SO libspdk_event_vhost_blk.so.3.0 00:04:54.171 SO libspdk_event_scheduler.so.4.0 00:04:54.171 SO libspdk_event_vfu_tgt.so.3.0 00:04:54.171 SO libspdk_event_sock.so.5.0 00:04:54.171 SO libspdk_event_vmd.so.6.0 00:04:54.171 SO libspdk_event_iobuf.so.3.0 00:04:54.171 SYMLINK libspdk_event_keyring.so 00:04:54.171 SYMLINK libspdk_event_fsdev.so 00:04:54.171 SYMLINK libspdk_event_vhost_blk.so 00:04:54.171 SYMLINK libspdk_event_scheduler.so 00:04:54.171 SYMLINK libspdk_event_vfu_tgt.so 00:04:54.171 SYMLINK libspdk_event_sock.so 00:04:54.171 SYMLINK libspdk_event_vmd.so 00:04:54.171 SYMLINK libspdk_event_iobuf.so 00:04:54.429 CC module/event/subsystems/accel/accel.o 00:04:54.701 LIB libspdk_event_accel.a 
00:04:54.701 SO libspdk_event_accel.so.6.0 00:04:54.701 SYMLINK libspdk_event_accel.so 00:04:54.964 CC module/event/subsystems/bdev/bdev.o 00:04:54.964 LIB libspdk_event_bdev.a 00:04:55.223 SO libspdk_event_bdev.so.6.0 00:04:55.223 SYMLINK libspdk_event_bdev.so 00:04:55.223 CC module/event/subsystems/nbd/nbd.o 00:04:55.223 CC module/event/subsystems/ublk/ublk.o 00:04:55.223 CC module/event/subsystems/scsi/scsi.o 00:04:55.223 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:55.223 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:55.482 LIB libspdk_event_nbd.a 00:04:55.482 LIB libspdk_event_ublk.a 00:04:55.482 LIB libspdk_event_scsi.a 00:04:55.482 SO libspdk_event_nbd.so.6.0 00:04:55.482 SO libspdk_event_ublk.so.3.0 00:04:55.482 SO libspdk_event_scsi.so.6.0 00:04:55.482 SYMLINK libspdk_event_ublk.so 00:04:55.482 SYMLINK libspdk_event_nbd.so 00:04:55.482 SYMLINK libspdk_event_scsi.so 00:04:55.482 LIB libspdk_event_nvmf.a 00:04:55.482 SO libspdk_event_nvmf.so.6.0 00:04:55.741 SYMLINK libspdk_event_nvmf.so 00:04:55.741 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:55.741 CC module/event/subsystems/iscsi/iscsi.o 00:04:55.741 LIB libspdk_event_vhost_scsi.a 00:04:55.999 SO libspdk_event_vhost_scsi.so.3.0 00:04:56.000 LIB libspdk_event_iscsi.a 00:04:56.000 SO libspdk_event_iscsi.so.6.0 00:04:56.000 SYMLINK libspdk_event_vhost_scsi.so 00:04:56.000 SYMLINK libspdk_event_iscsi.so 00:04:56.000 SO libspdk.so.6.0 00:04:56.000 SYMLINK libspdk.so 00:04:56.263 CC test/rpc_client/rpc_client_test.o 00:04:56.263 TEST_HEADER include/spdk/accel.h 00:04:56.263 TEST_HEADER include/spdk/accel_module.h 00:04:56.263 TEST_HEADER include/spdk/assert.h 00:04:56.263 TEST_HEADER include/spdk/barrier.h 00:04:56.263 TEST_HEADER include/spdk/bdev.h 00:04:56.263 TEST_HEADER include/spdk/base64.h 00:04:56.263 TEST_HEADER include/spdk/bdev_module.h 00:04:56.263 TEST_HEADER include/spdk/bdev_zone.h 00:04:56.264 TEST_HEADER include/spdk/bit_array.h 00:04:56.264 CC 
app/trace_record/trace_record.o 00:04:56.264 TEST_HEADER include/spdk/bit_pool.h 00:04:56.264 TEST_HEADER include/spdk/blob_bdev.h 00:04:56.264 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:56.264 CC app/spdk_top/spdk_top.o 00:04:56.264 TEST_HEADER include/spdk/blobfs.h 00:04:56.264 TEST_HEADER include/spdk/blob.h 00:04:56.264 TEST_HEADER include/spdk/conf.h 00:04:56.264 CXX app/trace/trace.o 00:04:56.264 TEST_HEADER include/spdk/config.h 00:04:56.264 TEST_HEADER include/spdk/cpuset.h 00:04:56.264 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.264 CC app/spdk_nvme_identify/identify.o 00:04:56.264 TEST_HEADER include/spdk/crc16.h 00:04:56.264 TEST_HEADER include/spdk/crc32.h 00:04:56.264 TEST_HEADER include/spdk/crc64.h 00:04:56.264 TEST_HEADER include/spdk/dif.h 00:04:56.264 TEST_HEADER include/spdk/dma.h 00:04:56.264 TEST_HEADER include/spdk/endian.h 00:04:56.264 CC app/spdk_lspci/spdk_lspci.o 00:04:56.264 TEST_HEADER include/spdk/env_dpdk.h 00:04:56.264 TEST_HEADER include/spdk/env.h 00:04:56.264 CC app/spdk_nvme_perf/perf.o 00:04:56.264 TEST_HEADER include/spdk/event.h 00:04:56.264 TEST_HEADER include/spdk/fd_group.h 00:04:56.264 TEST_HEADER include/spdk/fd.h 00:04:56.264 TEST_HEADER include/spdk/file.h 00:04:56.264 TEST_HEADER include/spdk/fsdev.h 00:04:56.264 TEST_HEADER include/spdk/fsdev_module.h 00:04:56.264 TEST_HEADER include/spdk/ftl.h 00:04:56.264 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:56.264 TEST_HEADER include/spdk/gpt_spec.h 00:04:56.264 TEST_HEADER include/spdk/hexlify.h 00:04:56.264 TEST_HEADER include/spdk/histogram_data.h 00:04:56.264 TEST_HEADER include/spdk/idxd.h 00:04:56.264 TEST_HEADER include/spdk/idxd_spec.h 00:04:56.264 TEST_HEADER include/spdk/init.h 00:04:56.264 TEST_HEADER include/spdk/ioat.h 00:04:56.264 TEST_HEADER include/spdk/ioat_spec.h 00:04:56.264 TEST_HEADER include/spdk/iscsi_spec.h 00:04:56.264 TEST_HEADER include/spdk/json.h 00:04:56.264 TEST_HEADER include/spdk/jsonrpc.h 00:04:56.264 TEST_HEADER 
include/spdk/keyring.h 00:04:56.264 TEST_HEADER include/spdk/likely.h 00:04:56.264 TEST_HEADER include/spdk/keyring_module.h 00:04:56.264 TEST_HEADER include/spdk/log.h 00:04:56.264 TEST_HEADER include/spdk/lvol.h 00:04:56.264 TEST_HEADER include/spdk/md5.h 00:04:56.264 TEST_HEADER include/spdk/memory.h 00:04:56.264 TEST_HEADER include/spdk/mmio.h 00:04:56.264 TEST_HEADER include/spdk/nbd.h 00:04:56.264 TEST_HEADER include/spdk/net.h 00:04:56.264 TEST_HEADER include/spdk/notify.h 00:04:56.264 TEST_HEADER include/spdk/nvme.h 00:04:56.264 TEST_HEADER include/spdk/nvme_intel.h 00:04:56.264 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:56.264 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:56.264 TEST_HEADER include/spdk/nvme_spec.h 00:04:56.264 TEST_HEADER include/spdk/nvme_zns.h 00:04:56.264 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:56.264 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:56.264 TEST_HEADER include/spdk/nvmf.h 00:04:56.264 TEST_HEADER include/spdk/nvmf_spec.h 00:04:56.264 TEST_HEADER include/spdk/nvmf_transport.h 00:04:56.264 TEST_HEADER include/spdk/opal.h 00:04:56.264 TEST_HEADER include/spdk/opal_spec.h 00:04:56.264 TEST_HEADER include/spdk/pci_ids.h 00:04:56.264 TEST_HEADER include/spdk/pipe.h 00:04:56.264 TEST_HEADER include/spdk/queue.h 00:04:56.264 TEST_HEADER include/spdk/rpc.h 00:04:56.264 TEST_HEADER include/spdk/reduce.h 00:04:56.264 TEST_HEADER include/spdk/scheduler.h 00:04:56.264 TEST_HEADER include/spdk/scsi.h 00:04:56.264 TEST_HEADER include/spdk/scsi_spec.h 00:04:56.264 TEST_HEADER include/spdk/sock.h 00:04:56.264 TEST_HEADER include/spdk/stdinc.h 00:04:56.264 TEST_HEADER include/spdk/string.h 00:04:56.264 TEST_HEADER include/spdk/thread.h 00:04:56.264 TEST_HEADER include/spdk/trace.h 00:04:56.264 TEST_HEADER include/spdk/trace_parser.h 00:04:56.264 TEST_HEADER include/spdk/tree.h 00:04:56.264 TEST_HEADER include/spdk/ublk.h 00:04:56.264 TEST_HEADER include/spdk/util.h 00:04:56.264 TEST_HEADER include/spdk/uuid.h 00:04:56.264 TEST_HEADER 
include/spdk/version.h 00:04:56.264 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:56.264 TEST_HEADER include/spdk/vhost.h 00:04:56.264 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:56.264 TEST_HEADER include/spdk/vmd.h 00:04:56.264 TEST_HEADER include/spdk/xor.h 00:04:56.264 TEST_HEADER include/spdk/zipf.h 00:04:56.264 CXX test/cpp_headers/accel_module.o 00:04:56.264 CXX test/cpp_headers/accel.o 00:04:56.264 CXX test/cpp_headers/assert.o 00:04:56.264 CXX test/cpp_headers/barrier.o 00:04:56.264 CXX test/cpp_headers/base64.o 00:04:56.264 CXX test/cpp_headers/bdev.o 00:04:56.264 CXX test/cpp_headers/bdev_module.o 00:04:56.264 CXX test/cpp_headers/bdev_zone.o 00:04:56.264 CXX test/cpp_headers/bit_array.o 00:04:56.264 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:56.264 CXX test/cpp_headers/bit_pool.o 00:04:56.264 CXX test/cpp_headers/blob_bdev.o 00:04:56.264 CXX test/cpp_headers/blobfs_bdev.o 00:04:56.264 CXX test/cpp_headers/blobfs.o 00:04:56.264 CXX test/cpp_headers/blob.o 00:04:56.264 CXX test/cpp_headers/conf.o 00:04:56.264 CXX test/cpp_headers/config.o 00:04:56.264 CXX test/cpp_headers/cpuset.o 00:04:56.264 CXX test/cpp_headers/crc16.o 00:04:56.264 CC app/spdk_dd/spdk_dd.o 00:04:56.264 CC app/nvmf_tgt/nvmf_main.o 00:04:56.264 CC app/iscsi_tgt/iscsi_tgt.o 00:04:56.264 CXX test/cpp_headers/crc32.o 00:04:56.526 CC test/env/pci/pci_ut.o 00:04:56.526 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.526 CC test/thread/poller_perf/poller_perf.o 00:04:56.526 CC test/app/stub/stub.o 00:04:56.526 CC test/app/histogram_perf/histogram_perf.o 00:04:56.526 CC test/env/vtophys/vtophys.o 00:04:56.526 CC app/spdk_tgt/spdk_tgt.o 00:04:56.526 CC examples/ioat/verify/verify.o 00:04:56.526 CC test/env/memory/memory_ut.o 00:04:56.526 CC examples/ioat/perf/perf.o 00:04:56.526 CC test/app/jsoncat/jsoncat.o 00:04:56.526 CC examples/util/zipf/zipf.o 00:04:56.526 CC app/fio/nvme/fio_plugin.o 00:04:56.526 CC test/dma/test_dma/test_dma.o 00:04:56.526 CC 
app/fio/bdev/fio_plugin.o 00:04:56.526 CC test/app/bdev_svc/bdev_svc.o 00:04:56.526 CC test/env/mem_callbacks/mem_callbacks.o 00:04:56.526 LINK spdk_lspci 00:04:56.526 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:56.790 LINK rpc_client_test 00:04:56.790 LINK spdk_nvme_discover 00:04:56.790 LINK jsoncat 00:04:56.790 LINK poller_perf 00:04:56.790 LINK interrupt_tgt 00:04:56.790 LINK histogram_perf 00:04:56.790 LINK nvmf_tgt 00:04:56.790 LINK vtophys 00:04:56.790 LINK zipf 00:04:56.790 CXX test/cpp_headers/crc64.o 00:04:56.790 CXX test/cpp_headers/dif.o 00:04:56.790 CXX test/cpp_headers/dma.o 00:04:56.790 CXX test/cpp_headers/endian.o 00:04:56.790 CXX test/cpp_headers/env_dpdk.o 00:04:56.790 CXX test/cpp_headers/env.o 00:04:56.790 CXX test/cpp_headers/event.o 00:04:56.790 CXX test/cpp_headers/fd_group.o 00:04:56.790 LINK env_dpdk_post_init 00:04:56.790 CXX test/cpp_headers/fd.o 00:04:56.790 CXX test/cpp_headers/file.o 00:04:56.790 CXX test/cpp_headers/fsdev.o 00:04:56.790 LINK stub 00:04:56.790 LINK spdk_trace_record 00:04:56.790 LINK iscsi_tgt 00:04:56.790 CXX test/cpp_headers/fsdev_module.o 00:04:56.790 CXX test/cpp_headers/ftl.o 00:04:56.790 CXX test/cpp_headers/fuse_dispatcher.o 00:04:56.790 CXX test/cpp_headers/gpt_spec.o 00:04:56.790 CXX test/cpp_headers/hexlify.o 00:04:56.790 LINK verify 00:04:56.790 CXX test/cpp_headers/histogram_data.o 00:04:57.061 LINK bdev_svc 00:04:57.061 LINK spdk_tgt 00:04:57.061 LINK ioat_perf 00:04:57.061 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:57.061 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:57.061 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:57.061 LINK mem_callbacks 00:04:57.061 CXX test/cpp_headers/idxd.o 00:04:57.061 CXX test/cpp_headers/idxd_spec.o 00:04:57.061 CXX test/cpp_headers/init.o 00:04:57.061 CXX test/cpp_headers/ioat.o 00:04:57.061 LINK spdk_dd 00:04:57.061 CXX test/cpp_headers/ioat_spec.o 00:04:57.061 CXX test/cpp_headers/iscsi_spec.o 00:04:57.061 CXX test/cpp_headers/json.o 00:04:57.061 CXX 
test/cpp_headers/jsonrpc.o 00:04:57.061 CXX test/cpp_headers/keyring.o 00:04:57.323 LINK spdk_trace 00:04:57.323 CXX test/cpp_headers/keyring_module.o 00:04:57.323 CXX test/cpp_headers/likely.o 00:04:57.323 CXX test/cpp_headers/log.o 00:04:57.323 CXX test/cpp_headers/lvol.o 00:04:57.323 CXX test/cpp_headers/md5.o 00:04:57.323 CXX test/cpp_headers/memory.o 00:04:57.323 LINK pci_ut 00:04:57.323 CXX test/cpp_headers/mmio.o 00:04:57.323 CXX test/cpp_headers/nbd.o 00:04:57.323 CXX test/cpp_headers/net.o 00:04:57.323 CXX test/cpp_headers/notify.o 00:04:57.323 CXX test/cpp_headers/nvme.o 00:04:57.323 CXX test/cpp_headers/nvme_intel.o 00:04:57.323 CXX test/cpp_headers/nvme_ocssd.o 00:04:57.323 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:57.323 CXX test/cpp_headers/nvme_spec.o 00:04:57.323 CXX test/cpp_headers/nvme_zns.o 00:04:57.323 CXX test/cpp_headers/nvmf_cmd.o 00:04:57.323 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:57.323 CC test/event/event_perf/event_perf.o 00:04:57.323 CC test/event/reactor/reactor.o 00:04:57.323 CC test/event/reactor_perf/reactor_perf.o 00:04:57.323 CXX test/cpp_headers/nvmf.o 00:04:57.323 CXX test/cpp_headers/nvmf_spec.o 00:04:57.323 LINK nvme_fuzz 00:04:57.323 CC test/event/app_repeat/app_repeat.o 00:04:57.589 CXX test/cpp_headers/nvmf_transport.o 00:04:57.589 CC test/event/scheduler/scheduler.o 00:04:57.589 CC examples/thread/thread/thread_ex.o 00:04:57.589 CXX test/cpp_headers/opal.o 00:04:57.589 CXX test/cpp_headers/opal_spec.o 00:04:57.589 CXX test/cpp_headers/pci_ids.o 00:04:57.589 LINK test_dma 00:04:57.589 CC examples/sock/hello_world/hello_sock.o 00:04:57.589 CC examples/vmd/lsvmd/lsvmd.o 00:04:57.589 CXX test/cpp_headers/pipe.o 00:04:57.589 CXX test/cpp_headers/queue.o 00:04:57.589 CC examples/idxd/perf/perf.o 00:04:57.590 CC examples/vmd/led/led.o 00:04:57.590 CXX test/cpp_headers/reduce.o 00:04:57.590 CXX test/cpp_headers/rpc.o 00:04:57.590 CXX test/cpp_headers/scheduler.o 00:04:57.590 CXX test/cpp_headers/scsi.o 00:04:57.590 CXX 
test/cpp_headers/scsi_spec.o 00:04:57.590 CXX test/cpp_headers/sock.o 00:04:57.590 CXX test/cpp_headers/stdinc.o 00:04:57.590 CXX test/cpp_headers/string.o 00:04:57.590 CXX test/cpp_headers/thread.o 00:04:57.590 CXX test/cpp_headers/trace.o 00:04:57.590 LINK reactor 00:04:57.590 LINK event_perf 00:04:57.590 CXX test/cpp_headers/trace_parser.o 00:04:57.851 CXX test/cpp_headers/tree.o 00:04:57.851 LINK reactor_perf 00:04:57.851 CXX test/cpp_headers/ublk.o 00:04:57.851 CXX test/cpp_headers/util.o 00:04:57.851 CXX test/cpp_headers/uuid.o 00:04:57.851 CXX test/cpp_headers/version.o 00:04:57.851 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.851 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.851 LINK spdk_bdev 00:04:57.851 LINK vhost_fuzz 00:04:57.851 CXX test/cpp_headers/vhost.o 00:04:57.851 LINK app_repeat 00:04:57.851 CXX test/cpp_headers/vmd.o 00:04:57.851 CXX test/cpp_headers/xor.o 00:04:57.851 CXX test/cpp_headers/zipf.o 00:04:57.851 LINK spdk_nvme_perf 00:04:57.851 LINK lsvmd 00:04:57.851 LINK spdk_nvme 00:04:57.851 CC app/vhost/vhost.o 00:04:57.851 LINK spdk_nvme_identify 00:04:57.851 LINK led 00:04:57.851 LINK scheduler 00:04:57.851 LINK memory_ut 00:04:58.111 LINK spdk_top 00:04:58.111 LINK thread 00:04:58.111 LINK hello_sock 00:04:58.111 CC test/nvme/e2edp/nvme_dp.o 00:04:58.111 CC test/nvme/reset/reset.o 00:04:58.111 CC test/nvme/cuse/cuse.o 00:04:58.111 CC test/nvme/connect_stress/connect_stress.o 00:04:58.111 CC test/nvme/overhead/overhead.o 00:04:58.111 CC test/nvme/startup/startup.o 00:04:58.111 CC test/nvme/err_injection/err_injection.o 00:04:58.111 CC test/nvme/aer/aer.o 00:04:58.111 CC test/nvme/fdp/fdp.o 00:04:58.111 CC test/nvme/sgl/sgl.o 00:04:58.111 CC test/nvme/boot_partition/boot_partition.o 00:04:58.111 CC test/nvme/fused_ordering/fused_ordering.o 00:04:58.111 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:58.111 CC test/nvme/compliance/nvme_compliance.o 00:04:58.111 CC test/nvme/reserve/reserve.o 00:04:58.111 CC 
test/nvme/simple_copy/simple_copy.o 00:04:58.111 LINK vhost 00:04:58.111 CC test/accel/dif/dif.o 00:04:58.111 LINK idxd_perf 00:04:58.371 CC test/blobfs/mkfs/mkfs.o 00:04:58.371 CC test/lvol/esnap/esnap.o 00:04:58.371 LINK connect_stress 00:04:58.371 LINK startup 00:04:58.371 LINK err_injection 00:04:58.371 LINK doorbell_aers 00:04:58.371 LINK boot_partition 00:04:58.371 LINK reserve 00:04:58.371 LINK simple_copy 00:04:58.371 CC examples/nvme/abort/abort.o 00:04:58.371 CC examples/nvme/hotplug/hotplug.o 00:04:58.371 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:58.371 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:58.371 CC examples/nvme/reconnect/reconnect.o 00:04:58.371 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:58.371 LINK fused_ordering 00:04:58.371 CC examples/nvme/arbitration/arbitration.o 00:04:58.371 CC examples/nvme/hello_world/hello_world.o 00:04:58.631 LINK reset 00:04:58.631 CC examples/accel/perf/accel_perf.o 00:04:58.631 LINK nvme_dp 00:04:58.631 LINK sgl 00:04:58.631 LINK mkfs 00:04:58.631 CC examples/blob/hello_world/hello_blob.o 00:04:58.631 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:58.631 LINK overhead 00:04:58.631 LINK aer 00:04:58.631 LINK nvme_compliance 00:04:58.631 CC examples/blob/cli/blobcli.o 00:04:58.631 LINK cmb_copy 00:04:58.890 LINK pmr_persistence 00:04:58.890 LINK fdp 00:04:58.890 LINK hotplug 00:04:58.890 LINK hello_world 00:04:58.890 LINK arbitration 00:04:58.890 LINK abort 00:04:58.890 LINK reconnect 00:04:58.890 LINK hello_blob 00:04:58.890 LINK hello_fsdev 00:04:58.890 LINK dif 00:04:59.149 LINK nvme_manage 00:04:59.149 LINK accel_perf 00:04:59.149 LINK blobcli 00:04:59.408 CC test/bdev/bdevio/bdevio.o 00:04:59.408 LINK iscsi_fuzz 00:04:59.408 CC examples/bdev/hello_world/hello_bdev.o 00:04:59.408 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.667 LINK hello_bdev 00:04:59.925 LINK bdevio 00:04:59.925 LINK cuse 00:05:00.184 LINK bdevperf 00:05:00.751 CC examples/nvmf/nvmf/nvmf.o 00:05:01.010 LINK nvmf 
00:05:03.543 LINK esnap 00:05:03.802 00:05:03.802 real 1m7.406s 00:05:03.802 user 9m4.441s 00:05:03.802 sys 1m58.273s 00:05:03.802 10:59:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:03.802 10:59:28 make -- common/autotest_common.sh@10 -- $ set +x 00:05:03.802 ************************************ 00:05:03.802 END TEST make 00:05:03.802 ************************************ 00:05:03.802 10:59:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:03.802 10:59:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:03.802 10:59:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:03.802 10:59:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.802 10:59:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:03.802 10:59:28 -- pm/common@44 -- $ pid=5472 00:05:03.802 10:59:28 -- pm/common@50 -- $ kill -TERM 5472 00:05:03.802 10:59:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.802 10:59:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:03.802 10:59:28 -- pm/common@44 -- $ pid=5474 00:05:03.802 10:59:28 -- pm/common@50 -- $ kill -TERM 5474 00:05:03.802 10:59:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.802 10:59:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:03.802 10:59:28 -- pm/common@44 -- $ pid=5476 00:05:03.802 10:59:28 -- pm/common@50 -- $ kill -TERM 5476 00:05:03.802 10:59:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:03.802 10:59:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:03.802 10:59:28 -- pm/common@44 -- $ pid=5506 00:05:03.802 10:59:28 -- pm/common@50 -- $ sudo -E kill -TERM 5506 00:05:03.802 10:59:28 -- 
spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:03.802 10:59:28 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:03.802 10:59:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.802 10:59:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.802 10:59:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.802 10:59:28 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.802 10:59:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.802 10:59:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.802 10:59:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.802 10:59:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.802 10:59:28 -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.802 10:59:28 -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.802 10:59:28 -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.802 10:59:28 -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.802 10:59:28 -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.802 10:59:28 -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.802 10:59:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.802 10:59:28 -- scripts/common.sh@344 -- # case "$op" in 00:05:03.802 10:59:28 -- scripts/common.sh@345 -- # : 1 00:05:03.802 10:59:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.802 10:59:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.802 10:59:28 -- scripts/common.sh@365 -- # decimal 1 00:05:03.802 10:59:28 -- scripts/common.sh@353 -- # local d=1 00:05:03.802 10:59:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.802 10:59:28 -- scripts/common.sh@355 -- # echo 1 00:05:03.802 10:59:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.802 10:59:28 -- scripts/common.sh@366 -- # decimal 2 00:05:03.802 10:59:28 -- scripts/common.sh@353 -- # local d=2 00:05:03.802 10:59:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.802 10:59:28 -- scripts/common.sh@355 -- # echo 2 00:05:03.802 10:59:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.802 10:59:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.803 10:59:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.803 10:59:28 -- scripts/common.sh@368 -- # return 0 00:05:03.803 10:59:28 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.803 10:59:28 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.803 --rc genhtml_branch_coverage=1 00:05:03.803 --rc genhtml_function_coverage=1 00:05:03.803 --rc genhtml_legend=1 00:05:03.803 --rc geninfo_all_blocks=1 00:05:03.803 --rc geninfo_unexecuted_blocks=1 00:05:03.803 00:05:03.803 ' 00:05:03.803 10:59:28 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.803 --rc genhtml_branch_coverage=1 00:05:03.803 --rc genhtml_function_coverage=1 00:05:03.803 --rc genhtml_legend=1 00:05:03.803 --rc geninfo_all_blocks=1 00:05:03.803 --rc geninfo_unexecuted_blocks=1 00:05:03.803 00:05:03.803 ' 00:05:03.803 10:59:28 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.803 --rc genhtml_branch_coverage=1 00:05:03.803 --rc 
genhtml_function_coverage=1 00:05:03.803 --rc genhtml_legend=1 00:05:03.803 --rc geninfo_all_blocks=1 00:05:03.803 --rc geninfo_unexecuted_blocks=1 00:05:03.803 00:05:03.803 ' 00:05:03.803 10:59:28 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.803 --rc genhtml_branch_coverage=1 00:05:03.803 --rc genhtml_function_coverage=1 00:05:03.803 --rc genhtml_legend=1 00:05:03.803 --rc geninfo_all_blocks=1 00:05:03.803 --rc geninfo_unexecuted_blocks=1 00:05:03.803 00:05:03.803 ' 00:05:03.803 10:59:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:03.803 10:59:28 -- nvmf/common.sh@7 -- # uname -s 00:05:03.803 10:59:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.803 10:59:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.803 10:59:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.803 10:59:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.803 10:59:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.803 10:59:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.803 10:59:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.803 10:59:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.803 10:59:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.803 10:59:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:04.063 10:59:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.063 10:59:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:04.063 10:59:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:04.063 10:59:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:04.063 10:59:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:04.063 10:59:28 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:04.063 10:59:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:04.063 10:59:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:04.063 10:59:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:04.063 10:59:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:04.063 10:59:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:04.063 10:59:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.063 10:59:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.063 10:59:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.063 10:59:28 -- paths/export.sh@5 -- # export PATH 00:05:04.063 10:59:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:04.063 10:59:28 -- nvmf/common.sh@51 -- # : 0 00:05:04.063 10:59:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:04.063 10:59:28 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:04.063 10:59:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:04.063 10:59:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:04.063 10:59:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:04.063 10:59:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:04.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:04.063 10:59:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:04.063 10:59:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:04.063 10:59:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:04.063 10:59:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:04.063 10:59:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:04.063 10:59:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:04.063 10:59:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:04.063 10:59:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:04.063 10:59:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:04.063 10:59:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:04.063 10:59:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:04.063 10:59:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:04.063 10:59:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:04.063 10:59:28 -- spdk/autotest.sh@48 -- # udevadm_pid=86245 00:05:04.063 10:59:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:04.063 10:59:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:04.063 10:59:28 -- pm/common@17 -- # local monitor 00:05:04.063 10:59:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.063 10:59:28 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:04.063 10:59:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.063 10:59:28 -- pm/common@21 -- # date +%s 00:05:04.063 10:59:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:04.063 10:59:28 -- pm/common@21 -- # date +%s 00:05:04.063 10:59:28 -- pm/common@25 -- # sleep 1 00:05:04.063 10:59:28 -- pm/common@21 -- # date +%s 00:05:04.063 10:59:28 -- pm/common@21 -- # date +%s 00:05:04.063 10:59:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731837568 00:05:04.063 10:59:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731837568 00:05:04.063 10:59:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731837568 00:05:04.063 10:59:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731837568 00:05:04.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731837568_collect-cpu-load.pm.log 00:05:04.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731837568_collect-vmstat.pm.log 00:05:04.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731837568_collect-cpu-temp.pm.log 00:05:04.063 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731837568_collect-bmc-pm.bmc.pm.log 00:05:05.002 
10:59:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:05.002 10:59:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:05.002 10:59:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.002 10:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.002 10:59:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:05.002 10:59:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:05.002 10:59:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.002 10:59:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:05.002 10:59:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.002 10:59:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.002 10:59:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:05.002 10:59:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.002 10:59:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:05.002 10:59:29 -- common/autotest_common.sh@1457 -- # uname 00:05:05.002 10:59:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:05.002 10:59:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:05.002 10:59:29 -- common/autotest_common.sh@1477 -- # uname 00:05:05.002 10:59:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:05.002 10:59:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:05.002 10:59:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:05.002 lcov: LCOV version 1.15 00:05:05.003 10:59:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:37.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:37.074 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.636 11:00:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:43.636 11:00:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.636 11:00:07 -- common/autotest_common.sh@10 -- # set +x 00:05:43.636 11:00:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:43.636 11:00:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:44.205 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:44.205 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:44.205 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:44.205 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:44.205 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:44.205 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:44.205 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:44.205 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:44.205 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:44.205 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:44.205 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:44.205 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:44.205 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:44.205 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:44.205 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:44.205 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:44.205 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:44.464 11:00:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:44.464 11:00:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:44.464 11:00:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:44.464 11:00:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:44.464 11:00:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:44.464 11:00:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:44.464 11:00:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:44.464 11:00:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:44.464 11:00:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:44.464 11:00:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:44.464 11:00:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:44.464 11:00:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:44.464 11:00:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:44.464 11:00:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:44.464 11:00:08 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:44.464 No valid GPT data, bailing 00:05:44.464 11:00:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:44.464 11:00:08 -- scripts/common.sh@394 -- # pt= 00:05:44.464 11:00:08 -- scripts/common.sh@395 -- # return 1 00:05:44.464 11:00:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:44.464 1+0 records in 00:05:44.464 1+0 records out 00:05:44.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00174948 s, 599 MB/s 00:05:44.464 11:00:08 -- spdk/autotest.sh@105 -- # sync 00:05:44.464 11:00:08 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:44.464 11:00:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:44.464 11:00:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:47.000 11:00:11 -- spdk/autotest.sh@111 -- # uname -s 00:05:47.000 11:00:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:47.000 11:00:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:47.000 11:00:11 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:47.568 Hugepages 00:05:47.568 node hugesize free / total 00:05:47.568 node0 1048576kB 0 / 0 00:05:47.568 node0 2048kB 0 / 0 00:05:47.827 node1 1048576kB 0 / 0 00:05:47.827 node1 2048kB 0 / 0 00:05:47.827 00:05:47.827 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:47.827 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:47.827 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:47.827 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:47.827 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:47.827 11:00:12 -- spdk/autotest.sh@117 -- # uname -s 00:05:47.827 11:00:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:47.827 11:00:12 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:05:47.827 11:00:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:49.210 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.210 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:49.210 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:50.151 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:50.151 11:00:14 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:51.539 11:00:15 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:51.539 11:00:15 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:51.539 11:00:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.539 11:00:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:51.539 11:00:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:51.539 11:00:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:51.539 11:00:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.539 11:00:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.539 11:00:15 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:51.539 11:00:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:51.539 11:00:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:51.539 11:00:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.480 Waiting for block devices as requested 00:05:52.480 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:52.480 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:52.740 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:52.740 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:52.740 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.002 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.002 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.002 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:53.002 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:53.262 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:53.262 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:53.262 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:53.522 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:53.522 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:53.522 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:53.522 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:53.781 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:53.781 11:00:18 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:53.781 11:00:18 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:53.781 11:00:18 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:53.781 11:00:18 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:53.781 11:00:18 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:53.781 11:00:18 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:53.781 11:00:18 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:05:53.781 11:00:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:53.781 11:00:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:53.781 11:00:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:53.781 11:00:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:53.781 11:00:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:53.781 11:00:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:53.781 11:00:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:53.781 11:00:18 -- common/autotest_common.sh@1543 -- # continue 00:05:53.781 11:00:18 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:53.781 11:00:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.781 11:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:53.781 11:00:18 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:53.781 11:00:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.781 11:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:53.781 11:00:18 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:55.164 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:05:55.164 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.164 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:55.164 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:55.164 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:55.164 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:55.164 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:55.425 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:55.425 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:55.425 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:55.425 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:56.368 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:56.368 11:00:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.368 11:00:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.368 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:56.368 11:00:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.368 11:00:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:56.368 11:00:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.368 11:00:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:56.368 11:00:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:56.368 11:00:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:56.368 11:00:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.368 11:00:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:56.368 11:00:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:56.368 11:00:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:56.368 11:00:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:05:56.368 11:00:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:56.368 11:00:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:56.368 11:00:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:56.368 11:00:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:56.368 11:00:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.368 11:00:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:56.368 11:00:20 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:05:56.368 11:00:20 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:56.368 11:00:20 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:05:56.368 11:00:20 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:05:56.368 11:00:20 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:05:56.368 11:00:20 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:05:56.368 11:00:20 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97609 00:05:56.368 11:00:20 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:56.368 11:00:20 -- common/autotest_common.sh@1585 -- # waitforlisten 97609 00:05:56.368 11:00:20 -- common/autotest_common.sh@835 -- # '[' -z 97609 ']' 00:05:56.368 11:00:20 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.368 11:00:20 -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.368 11:00:20 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.368 11:00:20 -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.368 11:00:20 -- common/autotest_common.sh@10 -- # set +x 00:05:56.627 [2024-11-17 11:00:21.058433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:05:56.627 [2024-11-17 11:00:21.058550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97609 ] 00:05:56.627 [2024-11-17 11:00:21.126678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.627 [2024-11-17 11:00:21.173682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.886 11:00:21 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.886 11:00:21 -- common/autotest_common.sh@868 -- # return 0 00:05:56.886 11:00:21 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:05:56.886 11:00:21 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:05:56.886 11:00:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:00.175 nvme0n1 00:06:00.175 11:00:24 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:00.175 [2024-11-17 11:00:24.782314] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:00.175 [2024-11-17 11:00:24.782363] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:00.175 request: 00:06:00.175 { 00:06:00.175 "nvme_ctrlr_name": "nvme0", 00:06:00.175 "password": "test", 00:06:00.175 "method": "bdev_nvme_opal_revert", 00:06:00.175 "req_id": 1 00:06:00.175 } 00:06:00.175 Got JSON-RPC error response 00:06:00.175 response: 00:06:00.175 { 00:06:00.175 
"code": -32603, 00:06:00.175 "message": "Internal error" 00:06:00.175 } 00:06:00.175 11:00:24 -- common/autotest_common.sh@1591 -- # true 00:06:00.175 11:00:24 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:00.175 11:00:24 -- common/autotest_common.sh@1595 -- # killprocess 97609 00:06:00.175 11:00:24 -- common/autotest_common.sh@954 -- # '[' -z 97609 ']' 00:06:00.175 11:00:24 -- common/autotest_common.sh@958 -- # kill -0 97609 00:06:00.175 11:00:24 -- common/autotest_common.sh@959 -- # uname 00:06:00.175 11:00:24 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.175 11:00:24 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97609 00:06:00.435 11:00:24 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.435 11:00:24 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.435 11:00:24 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97609' 00:06:00.435 killing process with pid 97609 00:06:00.435 11:00:24 -- common/autotest_common.sh@973 -- # kill 97609 00:06:00.435 11:00:24 -- common/autotest_common.sh@978 -- # wait 97609 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.435 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:00.436 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:02.340 11:00:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:02.340 11:00:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:02.340 11:00:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.340 11:00:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:02.340 11:00:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:02.340 11:00:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.340 11:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.340 11:00:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:02.340 11:00:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.340 11:00:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.340 11:00:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.340 11:00:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.340 ************************************ 00:06:02.340 START TEST env 00:06:02.340 ************************************ 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:02.340 * Looking for test 
storage... 00:06:02.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.340 11:00:26 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.340 11:00:26 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.340 11:00:26 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.340 11:00:26 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.340 11:00:26 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.340 11:00:26 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.340 11:00:26 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.340 11:00:26 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.340 11:00:26 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.340 11:00:26 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.340 11:00:26 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.340 11:00:26 env -- scripts/common.sh@344 -- # case "$op" in 00:06:02.340 11:00:26 env -- scripts/common.sh@345 -- # : 1 00:06:02.340 11:00:26 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.340 11:00:26 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.340 11:00:26 env -- scripts/common.sh@365 -- # decimal 1 00:06:02.340 11:00:26 env -- scripts/common.sh@353 -- # local d=1 00:06:02.340 11:00:26 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.340 11:00:26 env -- scripts/common.sh@355 -- # echo 1 00:06:02.340 11:00:26 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.340 11:00:26 env -- scripts/common.sh@366 -- # decimal 2 00:06:02.340 11:00:26 env -- scripts/common.sh@353 -- # local d=2 00:06:02.340 11:00:26 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.340 11:00:26 env -- scripts/common.sh@355 -- # echo 2 00:06:02.340 11:00:26 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.340 11:00:26 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.340 11:00:26 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.340 11:00:26 env -- scripts/common.sh@368 -- # return 0 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.340 --rc genhtml_branch_coverage=1 00:06:02.340 --rc genhtml_function_coverage=1 00:06:02.340 --rc genhtml_legend=1 00:06:02.340 --rc geninfo_all_blocks=1 00:06:02.340 --rc geninfo_unexecuted_blocks=1 00:06:02.340 00:06:02.340 ' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.340 --rc genhtml_branch_coverage=1 00:06:02.340 --rc genhtml_function_coverage=1 00:06:02.340 --rc genhtml_legend=1 00:06:02.340 --rc geninfo_all_blocks=1 00:06:02.340 --rc geninfo_unexecuted_blocks=1 00:06:02.340 00:06:02.340 ' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:02.340 --rc genhtml_branch_coverage=1 00:06:02.340 --rc genhtml_function_coverage=1 00:06:02.340 --rc genhtml_legend=1 00:06:02.340 --rc geninfo_all_blocks=1 00:06:02.340 --rc geninfo_unexecuted_blocks=1 00:06:02.340 00:06:02.340 ' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.340 --rc genhtml_branch_coverage=1 00:06:02.340 --rc genhtml_function_coverage=1 00:06:02.340 --rc genhtml_legend=1 00:06:02.340 --rc geninfo_all_blocks=1 00:06:02.340 --rc geninfo_unexecuted_blocks=1 00:06:02.340 00:06:02.340 ' 00:06:02.340 11:00:26 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.340 11:00:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.340 ************************************ 00:06:02.340 START TEST env_memory 00:06:02.340 ************************************ 00:06:02.340 11:00:26 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:02.340 00:06:02.340 00:06:02.340 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.340 http://cunit.sourceforge.net/ 00:06:02.340 00:06:02.340 00:06:02.340 Suite: memory 00:06:02.340 Test: alloc and free memory map ...[2024-11-17 11:00:26.778056] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:02.340 passed 00:06:02.340 Test: mem map translation ...[2024-11-17 11:00:26.797282] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:02.340 [2024-11-17 
11:00:26.797303] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:02.340 [2024-11-17 11:00:26.797353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:02.340 [2024-11-17 11:00:26.797365] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:02.340 passed 00:06:02.340 Test: mem map registration ...[2024-11-17 11:00:26.838814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:02.340 [2024-11-17 11:00:26.838834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:02.340 passed 00:06:02.340 Test: mem map adjacent registrations ...passed 00:06:02.340 00:06:02.340 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.340 suites 1 1 n/a 0 0 00:06:02.340 tests 4 4 4 0 0 00:06:02.340 asserts 152 152 152 0 n/a 00:06:02.340 00:06:02.340 Elapsed time = 0.138 seconds 00:06:02.340 00:06:02.340 real 0m0.146s 00:06:02.340 user 0m0.138s 00:06:02.340 sys 0m0.008s 00:06:02.340 11:00:26 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.340 11:00:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:02.340 ************************************ 00:06:02.340 END TEST env_memory 00:06:02.340 ************************************ 00:06:02.340 11:00:26 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:02.340 11:00:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.340 11:00:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.340 ************************************ 00:06:02.340 START TEST env_vtophys 00:06:02.340 ************************************ 00:06:02.340 11:00:26 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.340 EAL: lib.eal log level changed from notice to debug 00:06:02.340 EAL: Detected lcore 0 as core 0 on socket 0 00:06:02.340 EAL: Detected lcore 1 as core 1 on socket 0 00:06:02.340 EAL: Detected lcore 2 as core 2 on socket 0 00:06:02.340 EAL: Detected lcore 3 as core 3 on socket 0 00:06:02.340 EAL: Detected lcore 4 as core 4 on socket 0 00:06:02.340 EAL: Detected lcore 5 as core 5 on socket 0 00:06:02.340 EAL: Detected lcore 6 as core 8 on socket 0 00:06:02.340 EAL: Detected lcore 7 as core 9 on socket 0 00:06:02.340 EAL: Detected lcore 8 as core 10 on socket 0 00:06:02.340 EAL: Detected lcore 9 as core 11 on socket 0 00:06:02.340 EAL: Detected lcore 10 as core 12 on socket 0 00:06:02.340 EAL: Detected lcore 11 as core 13 on socket 0 00:06:02.340 EAL: Detected lcore 12 as core 0 on socket 1 00:06:02.340 EAL: Detected lcore 13 as core 1 on socket 1 00:06:02.340 EAL: Detected lcore 14 as core 2 on socket 1 00:06:02.340 EAL: Detected lcore 15 as core 3 on socket 1 00:06:02.340 EAL: Detected lcore 16 as core 4 on socket 1 00:06:02.340 EAL: Detected lcore 17 as core 5 on socket 1 00:06:02.340 EAL: Detected lcore 18 as core 8 on socket 1 00:06:02.340 EAL: Detected lcore 19 as core 9 on socket 1 00:06:02.341 EAL: Detected lcore 20 as core 10 on socket 1 00:06:02.341 EAL: Detected lcore 21 as core 11 on socket 1 00:06:02.341 EAL: Detected lcore 22 as core 12 on socket 1 00:06:02.341 EAL: Detected lcore 23 as core 13 on socket 1 00:06:02.341 EAL: Detected lcore 24 as core 0 on socket 0 00:06:02.341 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:02.341 EAL: Detected lcore 26 as core 2 on socket 0 00:06:02.341 EAL: Detected lcore 27 as core 3 on socket 0 00:06:02.341 EAL: Detected lcore 28 as core 4 on socket 0 00:06:02.341 EAL: Detected lcore 29 as core 5 on socket 0 00:06:02.341 EAL: Detected lcore 30 as core 8 on socket 0 00:06:02.341 EAL: Detected lcore 31 as core 9 on socket 0 00:06:02.341 EAL: Detected lcore 32 as core 10 on socket 0 00:06:02.341 EAL: Detected lcore 33 as core 11 on socket 0 00:06:02.341 EAL: Detected lcore 34 as core 12 on socket 0 00:06:02.341 EAL: Detected lcore 35 as core 13 on socket 0 00:06:02.341 EAL: Detected lcore 36 as core 0 on socket 1 00:06:02.341 EAL: Detected lcore 37 as core 1 on socket 1 00:06:02.341 EAL: Detected lcore 38 as core 2 on socket 1 00:06:02.341 EAL: Detected lcore 39 as core 3 on socket 1 00:06:02.341 EAL: Detected lcore 40 as core 4 on socket 1 00:06:02.341 EAL: Detected lcore 41 as core 5 on socket 1 00:06:02.341 EAL: Detected lcore 42 as core 8 on socket 1 00:06:02.341 EAL: Detected lcore 43 as core 9 on socket 1 00:06:02.341 EAL: Detected lcore 44 as core 10 on socket 1 00:06:02.341 EAL: Detected lcore 45 as core 11 on socket 1 00:06:02.341 EAL: Detected lcore 46 as core 12 on socket 1 00:06:02.341 EAL: Detected lcore 47 as core 13 on socket 1 00:06:02.341 EAL: Maximum logical cores by configuration: 128 00:06:02.341 EAL: Detected CPU lcores: 48 00:06:02.341 EAL: Detected NUMA nodes: 2 00:06:02.341 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:02.341 EAL: Detected shared linkage of DPDK 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:02.341 EAL: Registered [vdev] bus. 
00:06:02.341 EAL: bus.vdev log level changed from disabled to notice 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:02.341 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:02.341 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:02.341 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:02.341 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.341 EAL: No shared files mode enabled, IPC is disabled 00:06:02.341 EAL: Bus pci wants IOVA as 'DC' 00:06:02.341 EAL: Bus vdev wants IOVA as 'DC' 00:06:02.341 EAL: Buses did not request a specific IOVA mode. 00:06:02.341 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:02.341 EAL: Selected IOVA mode 'VA' 00:06:02.341 EAL: Probing VFIO support... 00:06:02.341 EAL: IOMMU type 1 (Type 1) is supported 00:06:02.341 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:02.341 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:02.341 EAL: VFIO support initialized 00:06:02.341 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.341 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.341 EAL: Setting up physically contiguous memory... 
00:06:02.341 EAL: Setting maximum number of open files to 524288 00:06:02.341 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.341 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:02.341 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.341 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:02.341 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.341 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:02.341 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.341 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.341 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:02.341 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:02.341 EAL: Hugepages will be freed exactly as allocated. 
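The memseg-list numbers logged above are internally consistent: each list is created with n_segs:8192 and hugepage_sz:2097152 (2 MiB), and each corresponding VA reservation is 0x400000000 bytes. A quick arithmetic check (plain Python, not DPDK code) confirms the reserved size is exactly n_segs times the hugepage size:

```python
# Sanity-check the EAL memseg arithmetic from the log above (pure arithmetic,
# not DPDK code): each memseg list holds n_segs hugepages of hugepage_sz bytes.
n_segs = 8192
hugepage_sz = 2 * 1024 * 1024  # 2097152 bytes, the 2 MiB hugepage size logged

memseg_list_bytes = n_segs * hugepage_sz
print(hex(memseg_list_bytes))  # 0x400000000, the VA size reserved per list
```

So each of the 8 memseg lists (4 per NUMA socket) spans 16 GiB of reserved virtual address space.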
00:06:02.341 EAL: No shared files mode enabled, IPC is disabled 00:06:02.341 EAL: No shared files mode enabled, IPC is disabled 00:06:02.341 EAL: TSC frequency is ~2700000 KHz 00:06:02.341 EAL: Main lcore 0 is ready (tid=7f33fa40aa00;cpuset=[0]) 00:06:02.341 EAL: Trying to obtain current memory policy. 00:06:02.341 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.341 EAL: Restoring previous memory policy: 0 00:06:02.341 EAL: request: mp_malloc_sync 00:06:02.341 EAL: No shared files mode enabled, IPC is disabled 00:06:02.341 EAL: Heap on socket 0 was expanded by 2MB 00:06:02.341 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:02.600 EAL: Mem event callback 'spdk:(nil)' registered 00:06:02.600 00:06:02.600 00:06:02.600 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.600 http://cunit.sourceforge.net/ 00:06:02.600 00:06:02.600 00:06:02.600 Suite: components_suite 00:06:02.600 Test: vtophys_malloc_test ...passed 00:06:02.600 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 4MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was shrunk by 4MB 00:06:02.600 EAL: Trying to obtain current memory policy. 
00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 6MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was shrunk by 6MB 00:06:02.600 EAL: Trying to obtain current memory policy. 00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 10MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was shrunk by 10MB 00:06:02.600 EAL: Trying to obtain current memory policy. 00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 18MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was shrunk by 18MB 00:06:02.600 EAL: Trying to obtain current memory policy. 
00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 34MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was shrunk by 34MB 00:06:02.600 EAL: Trying to obtain current memory policy. 00:06:02.600 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.600 EAL: Restoring previous memory policy: 4 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.600 EAL: request: mp_malloc_sync 00:06:02.600 EAL: No shared files mode enabled, IPC is disabled 00:06:02.600 EAL: Heap on socket 0 was expanded by 66MB 00:06:02.600 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.601 EAL: request: mp_malloc_sync 00:06:02.601 EAL: No shared files mode enabled, IPC is disabled 00:06:02.601 EAL: Heap on socket 0 was shrunk by 66MB 00:06:02.601 EAL: Trying to obtain current memory policy. 00:06:02.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.601 EAL: Restoring previous memory policy: 4 00:06:02.601 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.601 EAL: request: mp_malloc_sync 00:06:02.601 EAL: No shared files mode enabled, IPC is disabled 00:06:02.601 EAL: Heap on socket 0 was expanded by 130MB 00:06:02.601 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.601 EAL: request: mp_malloc_sync 00:06:02.601 EAL: No shared files mode enabled, IPC is disabled 00:06:02.601 EAL: Heap on socket 0 was shrunk by 130MB 00:06:02.601 EAL: Trying to obtain current memory policy. 
00:06:02.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.601 EAL: Restoring previous memory policy: 4 00:06:02.601 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.601 EAL: request: mp_malloc_sync 00:06:02.601 EAL: No shared files mode enabled, IPC is disabled 00:06:02.601 EAL: Heap on socket 0 was expanded by 258MB 00:06:02.858 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.858 EAL: request: mp_malloc_sync 00:06:02.858 EAL: No shared files mode enabled, IPC is disabled 00:06:02.858 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.858 EAL: Trying to obtain current memory policy. 00:06:02.858 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.858 EAL: Restoring previous memory policy: 4 00:06:02.858 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.859 EAL: request: mp_malloc_sync 00:06:02.859 EAL: No shared files mode enabled, IPC is disabled 00:06:02.859 EAL: Heap on socket 0 was expanded by 514MB 00:06:03.117 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.117 EAL: request: mp_malloc_sync 00:06:03.117 EAL: No shared files mode enabled, IPC is disabled 00:06:03.117 EAL: Heap on socket 0 was shrunk by 514MB 00:06:03.117 EAL: Trying to obtain current memory policy. 
00:06:03.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:03.375 EAL: Restoring previous memory policy: 4 00:06:03.375 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.375 EAL: request: mp_malloc_sync 00:06:03.375 EAL: No shared files mode enabled, IPC is disabled 00:06:03.375 EAL: Heap on socket 0 was expanded by 1026MB 00:06:03.633 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.893 EAL: request: mp_malloc_sync 00:06:03.893 EAL: No shared files mode enabled, IPC is disabled 00:06:03.893 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.893 passed 00:06:03.893 00:06:03.893 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.893 suites 1 1 n/a 0 0 00:06:03.893 tests 2 2 2 0 0 00:06:03.893 asserts 497 497 497 0 n/a 00:06:03.893 00:06:03.893 Elapsed time = 1.291 seconds 00:06:03.893 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.893 EAL: request: mp_malloc_sync 00:06:03.893 EAL: No shared files mode enabled, IPC is disabled 00:06:03.893 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.893 EAL: No shared files mode enabled, IPC is disabled 00:06:03.893 EAL: No shared files mode enabled, IPC is disabled 00:06:03.893 EAL: No shared files mode enabled, IPC is disabled 00:06:03.893 00:06:03.893 real 0m1.408s 00:06:03.893 user 0m0.813s 00:06:03.893 sys 0m0.564s 00:06:03.893 11:00:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.893 11:00:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 ************************************ 00:06:03.893 END TEST env_vtophys 00:06:03.893 ************************************ 00:06:03.893 11:00:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.893 11:00:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.893 11:00:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.893 11:00:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 
************************************ 00:06:03.893 START TEST env_pci 00:06:03.893 ************************************ 00:06:03.893 11:00:28 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.893 00:06:03.893 00:06:03.893 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.893 http://cunit.sourceforge.net/ 00:06:03.893 00:06:03.893 00:06:03.893 Suite: pci 00:06:03.893 Test: pci_hook ...[2024-11-17 11:00:28.408426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98509 has claimed it 00:06:03.893 EAL: Cannot find device (10000:00:01.0) 00:06:03.893 EAL: Failed to attach device on primary process 00:06:03.893 passed 00:06:03.893 00:06:03.893 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.893 suites 1 1 n/a 0 0 00:06:03.893 tests 1 1 1 0 0 00:06:03.893 asserts 25 25 25 0 n/a 00:06:03.893 00:06:03.893 Elapsed time = 0.021 seconds 00:06:03.893 00:06:03.893 real 0m0.034s 00:06:03.893 user 0m0.009s 00:06:03.893 sys 0m0.025s 00:06:03.893 11:00:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.893 11:00:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 ************************************ 00:06:03.893 END TEST env_pci 00:06:03.893 ************************************ 00:06:03.893 11:00:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:03.893 11:00:28 env -- env/env.sh@15 -- # uname 00:06:03.893 11:00:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:03.893 11:00:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:03.893 11:00:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.893 11:00:28 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:03.893 11:00:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.893 11:00:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 ************************************ 00:06:03.893 START TEST env_dpdk_post_init 00:06:03.893 ************************************ 00:06:03.893 11:00:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.893 EAL: Detected CPU lcores: 48 00:06:03.893 EAL: Detected NUMA nodes: 2 00:06:03.893 EAL: Detected shared linkage of DPDK 00:06:03.893 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.893 EAL: Selected IOVA mode 'VA' 00:06:03.893 EAL: VFIO support initialized 00:06:03.893 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:04.153 EAL: Using IOMMU type 1 (Type 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:04.153 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:04.153 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:05.095 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:08.381 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:08.381 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:08.381 Starting DPDK initialization... 00:06:08.381 Starting SPDK post initialization... 00:06:08.381 SPDK NVMe probe 00:06:08.382 Attaching to 0000:88:00.0 00:06:08.382 Attached to 0000:88:00.0 00:06:08.382 Cleaning up... 00:06:08.382 00:06:08.382 real 0m4.397s 00:06:08.382 user 0m3.274s 00:06:08.382 sys 0m0.180s 00:06:08.382 11:00:32 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.382 11:00:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.382 ************************************ 00:06:08.382 END TEST env_dpdk_post_init 00:06:08.382 ************************************ 00:06:08.382 11:00:32 env -- env/env.sh@26 -- # uname 00:06:08.382 11:00:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.382 11:00:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.382 11:00:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.382 11:00:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.382 11:00:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.382 ************************************ 00:06:08.382 START TEST env_mem_callbacks 00:06:08.382 ************************************ 00:06:08.382 11:00:32 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.382 EAL: Detected CPU lcores: 48 00:06:08.382 EAL: Detected NUMA nodes: 2 00:06:08.382 EAL: Detected shared linkage of DPDK 00:06:08.382 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.382 EAL: Selected IOVA mode 'VA' 00:06:08.382 EAL: VFIO support initialized 00:06:08.382 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.382 00:06:08.382 00:06:08.382 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.382 http://cunit.sourceforge.net/ 00:06:08.382 00:06:08.382 00:06:08.382 Suite: memory 00:06:08.382 Test: test ... 00:06:08.382 register 0x200000200000 2097152 00:06:08.382 malloc 3145728 00:06:08.382 register 0x200000400000 4194304 00:06:08.382 buf 0x200000500000 len 3145728 PASSED 00:06:08.382 malloc 64 00:06:08.382 buf 0x2000004fff40 len 64 PASSED 00:06:08.382 malloc 4194304 00:06:08.382 register 0x200000800000 6291456 00:06:08.382 buf 0x200000a00000 len 4194304 PASSED 00:06:08.382 free 0x200000500000 3145728 00:06:08.382 free 0x2000004fff40 64 00:06:08.382 unregister 0x200000400000 4194304 PASSED 00:06:08.382 free 0x200000a00000 4194304 00:06:08.382 unregister 0x200000800000 6291456 PASSED 00:06:08.382 malloc 8388608 00:06:08.382 register 0x200000400000 10485760 00:06:08.382 buf 0x200000600000 len 8388608 PASSED 00:06:08.382 free 0x200000600000 8388608 00:06:08.382 unregister 0x200000400000 10485760 PASSED 00:06:08.382 passed 00:06:08.382 00:06:08.382 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.382 suites 1 1 n/a 0 0 00:06:08.382 tests 1 1 1 0 0 00:06:08.382 asserts 15 15 15 0 n/a 00:06:08.382 00:06:08.382 Elapsed time = 0.005 seconds 00:06:08.382 00:06:08.382 real 0m0.046s 00:06:08.382 user 0m0.015s 00:06:08.382 sys 0m0.031s 00:06:08.382 11:00:32 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.382 11:00:32 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:08.382 ************************************ 00:06:08.382 END TEST env_mem_callbacks 00:06:08.382 ************************************ 00:06:08.382 00:06:08.382 real 0m6.431s 00:06:08.382 user 0m4.437s 00:06:08.382 sys 0m1.042s 00:06:08.382 11:00:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.382 11:00:33 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.382 ************************************ 00:06:08.382 END TEST env 00:06:08.382 ************************************ 00:06:08.382 11:00:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.382 11:00:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.382 11:00:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.382 11:00:33 -- common/autotest_common.sh@10 -- # set +x 00:06:08.641 ************************************ 00:06:08.641 START TEST rpc 00:06:08.641 ************************************ 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.641 * Looking for test storage... 
00:06:08.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.641 11:00:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.641 11:00:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.641 11:00:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.641 11:00:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.641 11:00:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.641 11:00:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:08.641 11:00:33 rpc -- scripts/common.sh@345 -- # : 1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.641 11:00:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.641 11:00:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@353 -- # local d=1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.641 11:00:33 rpc -- scripts/common.sh@355 -- # echo 1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.641 11:00:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@353 -- # local d=2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.641 11:00:33 rpc -- scripts/common.sh@355 -- # echo 2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.641 11:00:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.641 11:00:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.641 11:00:33 rpc -- scripts/common.sh@368 -- # return 0 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.641 --rc genhtml_branch_coverage=1 00:06:08.641 --rc genhtml_function_coverage=1 00:06:08.641 --rc genhtml_legend=1 00:06:08.641 --rc geninfo_all_blocks=1 00:06:08.641 --rc geninfo_unexecuted_blocks=1 00:06:08.641 00:06:08.641 ' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.641 --rc genhtml_branch_coverage=1 00:06:08.641 --rc genhtml_function_coverage=1 00:06:08.641 --rc genhtml_legend=1 00:06:08.641 --rc geninfo_all_blocks=1 00:06:08.641 --rc geninfo_unexecuted_blocks=1 00:06:08.641 00:06:08.641 ' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:08.641 --rc genhtml_branch_coverage=1 00:06:08.641 --rc genhtml_function_coverage=1 00:06:08.641 --rc genhtml_legend=1 00:06:08.641 --rc geninfo_all_blocks=1 00:06:08.641 --rc geninfo_unexecuted_blocks=1 00:06:08.641 00:06:08.641 ' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.641 --rc genhtml_branch_coverage=1 00:06:08.641 --rc genhtml_function_coverage=1 00:06:08.641 --rc genhtml_legend=1 00:06:08.641 --rc geninfo_all_blocks=1 00:06:08.641 --rc geninfo_unexecuted_blocks=1 00:06:08.641 00:06:08.641 ' 00:06:08.641 11:00:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99173 00:06:08.641 11:00:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:08.641 11:00:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.641 11:00:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99173 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 99173 ']' 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.641 11:00:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.641 [2024-11-17 11:00:33.257231] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:08.641 [2024-11-17 11:00:33.257334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99173 ] 00:06:08.901 [2024-11-17 11:00:33.326163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.901 [2024-11-17 11:00:33.372230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:08.901 [2024-11-17 11:00:33.372300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99173' to capture a snapshot of events at runtime. 00:06:08.901 [2024-11-17 11:00:33.372328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.901 [2024-11-17 11:00:33.372339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.901 [2024-11-17 11:00:33.372348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99173 for offline analysis/debug. 
00:06:08.901 [2024-11-17 11:00:33.372994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.161 11:00:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.161 11:00:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.161 11:00:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.161 11:00:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.161 11:00:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:09.161 11:00:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:09.161 11:00:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.161 11:00:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.161 11:00:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 ************************************ 00:06:09.161 START TEST rpc_integrity 00:06:09.161 ************************************ 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.161 11:00:33 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.161 { 00:06:09.161 "name": "Malloc0", 00:06:09.161 "aliases": [ 00:06:09.161 "ca039040-2e1a-436b-8dae-d444b8d971ce" 00:06:09.161 ], 00:06:09.161 "product_name": "Malloc disk", 00:06:09.161 "block_size": 512, 00:06:09.161 "num_blocks": 16384, 00:06:09.161 "uuid": "ca039040-2e1a-436b-8dae-d444b8d971ce", 00:06:09.161 "assigned_rate_limits": { 00:06:09.161 "rw_ios_per_sec": 0, 00:06:09.161 "rw_mbytes_per_sec": 0, 00:06:09.161 "r_mbytes_per_sec": 0, 00:06:09.161 "w_mbytes_per_sec": 0 00:06:09.161 }, 00:06:09.161 "claimed": false, 00:06:09.161 "zoned": false, 00:06:09.161 "supported_io_types": { 00:06:09.161 "read": true, 00:06:09.161 "write": true, 00:06:09.161 "unmap": true, 00:06:09.161 "flush": true, 00:06:09.161 "reset": true, 00:06:09.161 "nvme_admin": false, 00:06:09.161 "nvme_io": false, 00:06:09.161 "nvme_io_md": false, 00:06:09.161 "write_zeroes": true, 00:06:09.161 "zcopy": true, 00:06:09.161 "get_zone_info": false, 00:06:09.161 
"zone_management": false, 00:06:09.161 "zone_append": false, 00:06:09.161 "compare": false, 00:06:09.161 "compare_and_write": false, 00:06:09.161 "abort": true, 00:06:09.161 "seek_hole": false, 00:06:09.161 "seek_data": false, 00:06:09.161 "copy": true, 00:06:09.161 "nvme_iov_md": false 00:06:09.161 }, 00:06:09.161 "memory_domains": [ 00:06:09.161 { 00:06:09.161 "dma_device_id": "system", 00:06:09.161 "dma_device_type": 1 00:06:09.161 }, 00:06:09.161 { 00:06:09.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.161 "dma_device_type": 2 00:06:09.161 } 00:06:09.161 ], 00:06:09.161 "driver_specific": {} 00:06:09.161 } 00:06:09.161 ]' 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 [2024-11-17 11:00:33.757717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:09.161 [2024-11-17 11:00:33.757772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.161 [2024-11-17 11:00:33.757795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18778d0 00:06:09.161 [2024-11-17 11:00:33.757809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.161 [2024-11-17 11:00:33.759141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.161 [2024-11-17 11:00:33.759163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.161 Passthru0 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.161 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.161 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.161 { 00:06:09.161 "name": "Malloc0", 00:06:09.161 "aliases": [ 00:06:09.161 "ca039040-2e1a-436b-8dae-d444b8d971ce" 00:06:09.161 ], 00:06:09.161 "product_name": "Malloc disk", 00:06:09.161 "block_size": 512, 00:06:09.161 "num_blocks": 16384, 00:06:09.161 "uuid": "ca039040-2e1a-436b-8dae-d444b8d971ce", 00:06:09.161 "assigned_rate_limits": { 00:06:09.161 "rw_ios_per_sec": 0, 00:06:09.161 "rw_mbytes_per_sec": 0, 00:06:09.161 "r_mbytes_per_sec": 0, 00:06:09.161 "w_mbytes_per_sec": 0 00:06:09.161 }, 00:06:09.161 "claimed": true, 00:06:09.161 "claim_type": "exclusive_write", 00:06:09.161 "zoned": false, 00:06:09.161 "supported_io_types": { 00:06:09.161 "read": true, 00:06:09.162 "write": true, 00:06:09.162 "unmap": true, 00:06:09.162 "flush": true, 00:06:09.162 "reset": true, 00:06:09.162 "nvme_admin": false, 00:06:09.162 "nvme_io": false, 00:06:09.162 "nvme_io_md": false, 00:06:09.162 "write_zeroes": true, 00:06:09.162 "zcopy": true, 00:06:09.162 "get_zone_info": false, 00:06:09.162 "zone_management": false, 00:06:09.162 "zone_append": false, 00:06:09.162 "compare": false, 00:06:09.162 "compare_and_write": false, 00:06:09.162 "abort": true, 00:06:09.162 "seek_hole": false, 00:06:09.162 "seek_data": false, 00:06:09.162 "copy": true, 00:06:09.162 "nvme_iov_md": false 00:06:09.162 }, 00:06:09.162 "memory_domains": [ 00:06:09.162 { 00:06:09.162 "dma_device_id": "system", 00:06:09.162 "dma_device_type": 1 00:06:09.162 }, 00:06:09.162 { 00:06:09.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.162 "dma_device_type": 2 00:06:09.162 } 00:06:09.162 ], 00:06:09.162 "driver_specific": {} 00:06:09.162 }, 00:06:09.162 { 
00:06:09.162 "name": "Passthru0", 00:06:09.162 "aliases": [ 00:06:09.162 "c46f665a-0ab3-5ab9-9188-62d744e91643" 00:06:09.162 ], 00:06:09.162 "product_name": "passthru", 00:06:09.162 "block_size": 512, 00:06:09.162 "num_blocks": 16384, 00:06:09.162 "uuid": "c46f665a-0ab3-5ab9-9188-62d744e91643", 00:06:09.162 "assigned_rate_limits": { 00:06:09.162 "rw_ios_per_sec": 0, 00:06:09.162 "rw_mbytes_per_sec": 0, 00:06:09.162 "r_mbytes_per_sec": 0, 00:06:09.162 "w_mbytes_per_sec": 0 00:06:09.162 }, 00:06:09.162 "claimed": false, 00:06:09.162 "zoned": false, 00:06:09.162 "supported_io_types": { 00:06:09.162 "read": true, 00:06:09.162 "write": true, 00:06:09.162 "unmap": true, 00:06:09.162 "flush": true, 00:06:09.162 "reset": true, 00:06:09.162 "nvme_admin": false, 00:06:09.162 "nvme_io": false, 00:06:09.162 "nvme_io_md": false, 00:06:09.162 "write_zeroes": true, 00:06:09.162 "zcopy": true, 00:06:09.162 "get_zone_info": false, 00:06:09.162 "zone_management": false, 00:06:09.162 "zone_append": false, 00:06:09.162 "compare": false, 00:06:09.162 "compare_and_write": false, 00:06:09.162 "abort": true, 00:06:09.162 "seek_hole": false, 00:06:09.162 "seek_data": false, 00:06:09.162 "copy": true, 00:06:09.162 "nvme_iov_md": false 00:06:09.162 }, 00:06:09.162 "memory_domains": [ 00:06:09.162 { 00:06:09.162 "dma_device_id": "system", 00:06:09.162 "dma_device_type": 1 00:06:09.162 }, 00:06:09.162 { 00:06:09.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.162 "dma_device_type": 2 00:06:09.162 } 00:06:09.162 ], 00:06:09.162 "driver_specific": { 00:06:09.162 "passthru": { 00:06:09.162 "name": "Passthru0", 00:06:09.162 "base_bdev_name": "Malloc0" 00:06:09.162 } 00:06:09.162 } 00:06:09.162 } 00:06:09.162 ]' 00:06:09.162 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.162 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.162 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.162 11:00:33 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.162 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.421 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.421 11:00:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.421 00:06:09.421 real 0m0.212s 00:06:09.421 user 0m0.137s 00:06:09.421 sys 0m0.017s 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 ************************************ 00:06:09.421 END TEST rpc_integrity 00:06:09.421 ************************************ 00:06:09.421 11:00:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:09.421 11:00:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.421 11:00:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.421 11:00:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 ************************************ 00:06:09.421 START TEST rpc_plugins 
00:06:09.421 ************************************ 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:09.421 { 00:06:09.421 "name": "Malloc1", 00:06:09.421 "aliases": [ 00:06:09.421 "5ed4be43-cd01-4301-8a08-a707d7c6d0d2" 00:06:09.421 ], 00:06:09.421 "product_name": "Malloc disk", 00:06:09.421 "block_size": 4096, 00:06:09.421 "num_blocks": 256, 00:06:09.421 "uuid": "5ed4be43-cd01-4301-8a08-a707d7c6d0d2", 00:06:09.421 "assigned_rate_limits": { 00:06:09.421 "rw_ios_per_sec": 0, 00:06:09.421 "rw_mbytes_per_sec": 0, 00:06:09.421 "r_mbytes_per_sec": 0, 00:06:09.421 "w_mbytes_per_sec": 0 00:06:09.421 }, 00:06:09.421 "claimed": false, 00:06:09.421 "zoned": false, 00:06:09.421 "supported_io_types": { 00:06:09.421 "read": true, 00:06:09.421 "write": true, 00:06:09.421 "unmap": true, 00:06:09.421 "flush": true, 00:06:09.421 "reset": true, 00:06:09.421 "nvme_admin": false, 00:06:09.421 "nvme_io": false, 00:06:09.421 "nvme_io_md": false, 00:06:09.421 "write_zeroes": true, 00:06:09.421 "zcopy": true, 00:06:09.421 "get_zone_info": false, 00:06:09.421 "zone_management": false, 00:06:09.421 
"zone_append": false, 00:06:09.421 "compare": false, 00:06:09.421 "compare_and_write": false, 00:06:09.421 "abort": true, 00:06:09.421 "seek_hole": false, 00:06:09.421 "seek_data": false, 00:06:09.421 "copy": true, 00:06:09.421 "nvme_iov_md": false 00:06:09.421 }, 00:06:09.421 "memory_domains": [ 00:06:09.421 { 00:06:09.421 "dma_device_id": "system", 00:06:09.421 "dma_device_type": 1 00:06:09.421 }, 00:06:09.421 { 00:06:09.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.421 "dma_device_type": 2 00:06:09.421 } 00:06:09.421 ], 00:06:09.421 "driver_specific": {} 00:06:09.421 } 00:06:09.421 ]' 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:09.421 11:00:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:09.421 11:00:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:09.421 00:06:09.421 real 0m0.106s 00:06:09.421 user 0m0.069s 00:06:09.421 sys 0m0.006s 00:06:09.421 11:00:34 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.421 11:00:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 ************************************ 
00:06:09.421 END TEST rpc_plugins 00:06:09.421 ************************************ 00:06:09.421 11:00:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.421 11:00:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.421 11:00:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.421 11:00:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 ************************************ 00:06:09.421 START TEST rpc_trace_cmd_test 00:06:09.421 ************************************ 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.421 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.679 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99173", 00:06:09.679 "tpoint_group_mask": "0x8", 00:06:09.679 "iscsi_conn": { 00:06:09.679 "mask": "0x2", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "scsi": { 00:06:09.679 "mask": "0x4", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "bdev": { 00:06:09.679 "mask": "0x8", 00:06:09.679 "tpoint_mask": "0xffffffffffffffff" 00:06:09.679 }, 00:06:09.679 "nvmf_rdma": { 00:06:09.679 "mask": "0x10", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "nvmf_tcp": { 00:06:09.679 "mask": "0x20", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "ftl": { 00:06:09.679 "mask": "0x40", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "blobfs": { 00:06:09.679 "mask": "0x80", 00:06:09.679 
"tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "dsa": { 00:06:09.679 "mask": "0x200", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "thread": { 00:06:09.679 "mask": "0x400", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "nvme_pcie": { 00:06:09.679 "mask": "0x800", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "iaa": { 00:06:09.679 "mask": "0x1000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "nvme_tcp": { 00:06:09.679 "mask": "0x2000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "bdev_nvme": { 00:06:09.679 "mask": "0x4000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "sock": { 00:06:09.679 "mask": "0x8000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "blob": { 00:06:09.679 "mask": "0x10000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "bdev_raid": { 00:06:09.679 "mask": "0x20000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 }, 00:06:09.679 "scheduler": { 00:06:09.679 "mask": "0x40000", 00:06:09.679 "tpoint_mask": "0x0" 00:06:09.679 } 00:06:09.679 }' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:09.679 00:06:09.679 real 0m0.182s 00:06:09.679 user 0m0.161s 00:06:09.679 sys 0m0.013s 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.679 11:00:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.679 ************************************ 00:06:09.679 END TEST rpc_trace_cmd_test 00:06:09.679 ************************************ 00:06:09.680 11:00:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:09.680 11:00:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:09.680 11:00:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:09.680 11:00:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.680 11:00:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.680 11:00:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.680 ************************************ 00:06:09.680 START TEST rpc_daemon_integrity 00:06:09.680 ************************************ 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.680 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.938 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.938 { 00:06:09.938 "name": "Malloc2", 00:06:09.938 "aliases": [ 00:06:09.938 "1a31998b-45b6-4a9d-84a4-f581fdc1b329" 00:06:09.938 ], 00:06:09.938 "product_name": "Malloc disk", 00:06:09.938 "block_size": 512, 00:06:09.938 "num_blocks": 16384, 00:06:09.938 "uuid": "1a31998b-45b6-4a9d-84a4-f581fdc1b329", 00:06:09.938 "assigned_rate_limits": { 00:06:09.938 "rw_ios_per_sec": 0, 00:06:09.938 "rw_mbytes_per_sec": 0, 00:06:09.938 "r_mbytes_per_sec": 0, 00:06:09.938 "w_mbytes_per_sec": 0 00:06:09.938 }, 00:06:09.938 "claimed": false, 00:06:09.938 "zoned": false, 00:06:09.938 "supported_io_types": { 00:06:09.938 "read": true, 00:06:09.938 "write": true, 00:06:09.938 "unmap": true, 00:06:09.938 "flush": true, 00:06:09.938 "reset": true, 00:06:09.938 "nvme_admin": false, 00:06:09.938 "nvme_io": false, 00:06:09.938 "nvme_io_md": false, 00:06:09.938 "write_zeroes": true, 00:06:09.938 "zcopy": true, 00:06:09.938 "get_zone_info": false, 00:06:09.938 "zone_management": false, 00:06:09.938 "zone_append": false, 00:06:09.938 "compare": false, 00:06:09.938 "compare_and_write": false, 00:06:09.938 "abort": true, 00:06:09.938 "seek_hole": false, 00:06:09.938 "seek_data": false, 00:06:09.938 "copy": true, 00:06:09.938 "nvme_iov_md": false 00:06:09.938 }, 00:06:09.938 "memory_domains": [ 00:06:09.938 { 
00:06:09.938 "dma_device_id": "system", 00:06:09.938 "dma_device_type": 1 00:06:09.938 }, 00:06:09.938 { 00:06:09.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.938 "dma_device_type": 2 00:06:09.938 } 00:06:09.938 ], 00:06:09.938 "driver_specific": {} 00:06:09.938 } 00:06:09.938 ]' 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 [2024-11-17 11:00:34.391693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:09.939 [2024-11-17 11:00:34.391735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.939 [2024-11-17 11:00:34.391757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1878560 00:06:09.939 [2024-11-17 11:00:34.391771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.939 [2024-11-17 11:00:34.392976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.939 [2024-11-17 11:00:34.392999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.939 Passthru0 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.939 { 00:06:09.939 "name": "Malloc2", 00:06:09.939 "aliases": [ 00:06:09.939 "1a31998b-45b6-4a9d-84a4-f581fdc1b329" 00:06:09.939 ], 00:06:09.939 "product_name": "Malloc disk", 00:06:09.939 "block_size": 512, 00:06:09.939 "num_blocks": 16384, 00:06:09.939 "uuid": "1a31998b-45b6-4a9d-84a4-f581fdc1b329", 00:06:09.939 "assigned_rate_limits": { 00:06:09.939 "rw_ios_per_sec": 0, 00:06:09.939 "rw_mbytes_per_sec": 0, 00:06:09.939 "r_mbytes_per_sec": 0, 00:06:09.939 "w_mbytes_per_sec": 0 00:06:09.939 }, 00:06:09.939 "claimed": true, 00:06:09.939 "claim_type": "exclusive_write", 00:06:09.939 "zoned": false, 00:06:09.939 "supported_io_types": { 00:06:09.939 "read": true, 00:06:09.939 "write": true, 00:06:09.939 "unmap": true, 00:06:09.939 "flush": true, 00:06:09.939 "reset": true, 00:06:09.939 "nvme_admin": false, 00:06:09.939 "nvme_io": false, 00:06:09.939 "nvme_io_md": false, 00:06:09.939 "write_zeroes": true, 00:06:09.939 "zcopy": true, 00:06:09.939 "get_zone_info": false, 00:06:09.939 "zone_management": false, 00:06:09.939 "zone_append": false, 00:06:09.939 "compare": false, 00:06:09.939 "compare_and_write": false, 00:06:09.939 "abort": true, 00:06:09.939 "seek_hole": false, 00:06:09.939 "seek_data": false, 00:06:09.939 "copy": true, 00:06:09.939 "nvme_iov_md": false 00:06:09.939 }, 00:06:09.939 "memory_domains": [ 00:06:09.939 { 00:06:09.939 "dma_device_id": "system", 00:06:09.939 "dma_device_type": 1 00:06:09.939 }, 00:06:09.939 { 00:06:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.939 "dma_device_type": 2 00:06:09.939 } 00:06:09.939 ], 00:06:09.939 "driver_specific": {} 00:06:09.939 }, 00:06:09.939 { 00:06:09.939 "name": "Passthru0", 00:06:09.939 "aliases": [ 00:06:09.939 "d36d52bc-f008-5cab-b4ca-9378c4c52a91" 00:06:09.939 ], 00:06:09.939 "product_name": "passthru", 00:06:09.939 "block_size": 512, 00:06:09.939 "num_blocks": 16384, 00:06:09.939 "uuid": 
"d36d52bc-f008-5cab-b4ca-9378c4c52a91", 00:06:09.939 "assigned_rate_limits": { 00:06:09.939 "rw_ios_per_sec": 0, 00:06:09.939 "rw_mbytes_per_sec": 0, 00:06:09.939 "r_mbytes_per_sec": 0, 00:06:09.939 "w_mbytes_per_sec": 0 00:06:09.939 }, 00:06:09.939 "claimed": false, 00:06:09.939 "zoned": false, 00:06:09.939 "supported_io_types": { 00:06:09.939 "read": true, 00:06:09.939 "write": true, 00:06:09.939 "unmap": true, 00:06:09.939 "flush": true, 00:06:09.939 "reset": true, 00:06:09.939 "nvme_admin": false, 00:06:09.939 "nvme_io": false, 00:06:09.939 "nvme_io_md": false, 00:06:09.939 "write_zeroes": true, 00:06:09.939 "zcopy": true, 00:06:09.939 "get_zone_info": false, 00:06:09.939 "zone_management": false, 00:06:09.939 "zone_append": false, 00:06:09.939 "compare": false, 00:06:09.939 "compare_and_write": false, 00:06:09.939 "abort": true, 00:06:09.939 "seek_hole": false, 00:06:09.939 "seek_data": false, 00:06:09.939 "copy": true, 00:06:09.939 "nvme_iov_md": false 00:06:09.939 }, 00:06:09.939 "memory_domains": [ 00:06:09.939 { 00:06:09.939 "dma_device_id": "system", 00:06:09.939 "dma_device_type": 1 00:06:09.939 }, 00:06:09.939 { 00:06:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.939 "dma_device_type": 2 00:06:09.939 } 00:06:09.939 ], 00:06:09.939 "driver_specific": { 00:06:09.939 "passthru": { 00:06:09.939 "name": "Passthru0", 00:06:09.939 "base_bdev_name": "Malloc2" 00:06:09.939 } 00:06:09.939 } 00:06:09.939 } 00:06:09.939 ]' 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.939 00:06:09.939 real 0m0.211s 00:06:09.939 user 0m0.141s 00:06:09.939 sys 0m0.018s 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.939 11:00:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.939 ************************************ 00:06:09.939 END TEST rpc_daemon_integrity 00:06:09.939 ************************************ 00:06:09.939 11:00:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.939 11:00:34 rpc -- rpc/rpc.sh@84 -- # killprocess 99173 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 99173 ']' 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@958 -- # kill -0 99173 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@959 -- # uname 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.939 11:00:34 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99173 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99173' 00:06:09.939 killing process with pid 99173 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@973 -- # kill 99173 00:06:09.939 11:00:34 rpc -- common/autotest_common.sh@978 -- # wait 99173 00:06:10.506 00:06:10.506 real 0m1.891s 00:06:10.506 user 0m2.355s 00:06:10.506 sys 0m0.594s 00:06:10.506 11:00:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.506 11:00:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 ************************************ 00:06:10.506 END TEST rpc 00:06:10.506 ************************************ 00:06:10.506 11:00:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.506 11:00:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.506 11:00:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.506 11:00:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 ************************************ 00:06:10.506 START TEST skip_rpc 00:06:10.506 ************************************ 00:06:10.506 11:00:34 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.506 * Looking for test storage... 
00:06:10.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.506 11:00:35 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.506 11:00:35 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.506 11:00:35 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.506 11:00:35 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.506 11:00:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.507 11:00:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.507 --rc genhtml_branch_coverage=1 00:06:10.507 --rc genhtml_function_coverage=1 00:06:10.507 --rc genhtml_legend=1 00:06:10.507 --rc geninfo_all_blocks=1 00:06:10.507 --rc geninfo_unexecuted_blocks=1 00:06:10.507 00:06:10.507 ' 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.507 --rc genhtml_branch_coverage=1 00:06:10.507 --rc genhtml_function_coverage=1 00:06:10.507 --rc genhtml_legend=1 00:06:10.507 --rc geninfo_all_blocks=1 00:06:10.507 --rc geninfo_unexecuted_blocks=1 00:06:10.507 00:06:10.507 ' 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:10.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.507 --rc genhtml_branch_coverage=1 00:06:10.507 --rc genhtml_function_coverage=1 00:06:10.507 --rc genhtml_legend=1 00:06:10.507 --rc geninfo_all_blocks=1 00:06:10.507 --rc geninfo_unexecuted_blocks=1 00:06:10.507 00:06:10.507 ' 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.507 --rc genhtml_branch_coverage=1 00:06:10.507 --rc genhtml_function_coverage=1 00:06:10.507 --rc genhtml_legend=1 00:06:10.507 --rc geninfo_all_blocks=1 00:06:10.507 --rc geninfo_unexecuted_blocks=1 00:06:10.507 00:06:10.507 ' 00:06:10.507 11:00:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.507 11:00:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.507 11:00:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.507 11:00:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.766 ************************************ 00:06:10.766 START TEST skip_rpc 00:06:10.766 ************************************ 00:06:10.766 11:00:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:10.766 11:00:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99620 00:06:10.766 11:00:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.766 11:00:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.766 11:00:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:10.766 [2024-11-17 11:00:35.220860] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:10.766 [2024-11-17 11:00:35.220922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99620 ] 00:06:10.766 [2024-11-17 11:00:35.285005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.766 [2024-11-17 11:00:35.330582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.039 11:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.040 11:00:40 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99620 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99620 ']' 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99620 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99620 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99620' 00:06:16.040 killing process with pid 99620 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99620 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99620 00:06:16.040 00:06:16.040 real 0m5.418s 00:06:16.040 user 0m5.112s 00:06:16.040 sys 0m0.316s 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.040 11:00:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.040 ************************************ 00:06:16.040 END TEST skip_rpc 00:06:16.040 ************************************ 00:06:16.040 11:00:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:16.040 11:00:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.040 11:00:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.040 11:00:40 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.040 ************************************ 00:06:16.040 START TEST skip_rpc_with_json 00:06:16.040 ************************************ 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100302 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100302 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100302 ']' 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.040 11:00:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.299 [2024-11-17 11:00:40.698464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:16.299 [2024-11-17 11:00:40.698585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100302 ] 00:06:16.299 [2024-11-17 11:00:40.765001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.299 [2024-11-17 11:00:40.811751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.558 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.558 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:16.558 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:16.558 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.558 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.558 [2024-11-17 11:00:41.069466] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:16.559 request: 00:06:16.559 { 00:06:16.559 "trtype": "tcp", 00:06:16.559 "method": "nvmf_get_transports", 00:06:16.559 "req_id": 1 00:06:16.559 } 00:06:16.559 Got JSON-RPC error response 00:06:16.559 response: 00:06:16.559 { 00:06:16.559 "code": -19, 00:06:16.559 "message": "No such device" 00:06:16.559 } 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.559 [2024-11-17 11:00:41.077611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.559 11:00:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.559 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.818 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.818 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.818 { 00:06:16.818 "subsystems": [ 00:06:16.818 { 00:06:16.818 "subsystem": "fsdev", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "fsdev_set_opts", 00:06:16.818 "params": { 00:06:16.818 "fsdev_io_pool_size": 65535, 00:06:16.818 "fsdev_io_cache_size": 256 00:06:16.818 } 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "vfio_user_target", 00:06:16.818 "config": null 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "keyring", 00:06:16.818 "config": [] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "iobuf", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "iobuf_set_options", 00:06:16.818 "params": { 00:06:16.818 "small_pool_count": 8192, 00:06:16.818 "large_pool_count": 1024, 00:06:16.818 "small_bufsize": 8192, 00:06:16.818 "large_bufsize": 135168, 00:06:16.818 "enable_numa": false 00:06:16.818 } 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "sock", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "sock_set_default_impl", 00:06:16.818 "params": { 00:06:16.818 "impl_name": "posix" 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "sock_impl_set_options", 00:06:16.818 "params": { 00:06:16.818 "impl_name": "ssl", 00:06:16.818 "recv_buf_size": 4096, 00:06:16.818 "send_buf_size": 4096, 
00:06:16.818 "enable_recv_pipe": true, 00:06:16.818 "enable_quickack": false, 00:06:16.818 "enable_placement_id": 0, 00:06:16.818 "enable_zerocopy_send_server": true, 00:06:16.818 "enable_zerocopy_send_client": false, 00:06:16.818 "zerocopy_threshold": 0, 00:06:16.818 "tls_version": 0, 00:06:16.818 "enable_ktls": false 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "sock_impl_set_options", 00:06:16.818 "params": { 00:06:16.818 "impl_name": "posix", 00:06:16.818 "recv_buf_size": 2097152, 00:06:16.818 "send_buf_size": 2097152, 00:06:16.818 "enable_recv_pipe": true, 00:06:16.818 "enable_quickack": false, 00:06:16.818 "enable_placement_id": 0, 00:06:16.818 "enable_zerocopy_send_server": true, 00:06:16.818 "enable_zerocopy_send_client": false, 00:06:16.818 "zerocopy_threshold": 0, 00:06:16.818 "tls_version": 0, 00:06:16.818 "enable_ktls": false 00:06:16.818 } 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "vmd", 00:06:16.818 "config": [] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "accel", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "accel_set_options", 00:06:16.818 "params": { 00:06:16.818 "small_cache_size": 128, 00:06:16.818 "large_cache_size": 16, 00:06:16.818 "task_count": 2048, 00:06:16.818 "sequence_count": 2048, 00:06:16.818 "buf_count": 2048 00:06:16.818 } 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "bdev", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "bdev_set_options", 00:06:16.818 "params": { 00:06:16.818 "bdev_io_pool_size": 65535, 00:06:16.818 "bdev_io_cache_size": 256, 00:06:16.818 "bdev_auto_examine": true, 00:06:16.818 "iobuf_small_cache_size": 128, 00:06:16.818 "iobuf_large_cache_size": 16 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "bdev_raid_set_options", 00:06:16.818 "params": { 00:06:16.818 "process_window_size_kb": 1024, 00:06:16.818 "process_max_bandwidth_mb_sec": 0 
00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "bdev_iscsi_set_options", 00:06:16.818 "params": { 00:06:16.818 "timeout_sec": 30 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "bdev_nvme_set_options", 00:06:16.818 "params": { 00:06:16.818 "action_on_timeout": "none", 00:06:16.818 "timeout_us": 0, 00:06:16.818 "timeout_admin_us": 0, 00:06:16.818 "keep_alive_timeout_ms": 10000, 00:06:16.818 "arbitration_burst": 0, 00:06:16.818 "low_priority_weight": 0, 00:06:16.818 "medium_priority_weight": 0, 00:06:16.818 "high_priority_weight": 0, 00:06:16.818 "nvme_adminq_poll_period_us": 10000, 00:06:16.818 "nvme_ioq_poll_period_us": 0, 00:06:16.818 "io_queue_requests": 0, 00:06:16.818 "delay_cmd_submit": true, 00:06:16.818 "transport_retry_count": 4, 00:06:16.818 "bdev_retry_count": 3, 00:06:16.818 "transport_ack_timeout": 0, 00:06:16.818 "ctrlr_loss_timeout_sec": 0, 00:06:16.818 "reconnect_delay_sec": 0, 00:06:16.818 "fast_io_fail_timeout_sec": 0, 00:06:16.818 "disable_auto_failback": false, 00:06:16.818 "generate_uuids": false, 00:06:16.818 "transport_tos": 0, 00:06:16.818 "nvme_error_stat": false, 00:06:16.818 "rdma_srq_size": 0, 00:06:16.818 "io_path_stat": false, 00:06:16.818 "allow_accel_sequence": false, 00:06:16.818 "rdma_max_cq_size": 0, 00:06:16.818 "rdma_cm_event_timeout_ms": 0, 00:06:16.818 "dhchap_digests": [ 00:06:16.818 "sha256", 00:06:16.818 "sha384", 00:06:16.818 "sha512" 00:06:16.818 ], 00:06:16.818 "dhchap_dhgroups": [ 00:06:16.818 "null", 00:06:16.818 "ffdhe2048", 00:06:16.818 "ffdhe3072", 00:06:16.818 "ffdhe4096", 00:06:16.818 "ffdhe6144", 00:06:16.818 "ffdhe8192" 00:06:16.818 ] 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "bdev_nvme_set_hotplug", 00:06:16.818 "params": { 00:06:16.818 "period_us": 100000, 00:06:16.818 "enable": false 00:06:16.818 } 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "method": "bdev_wait_for_examine" 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.818 { 
00:06:16.818 "subsystem": "scsi", 00:06:16.818 "config": null 00:06:16.818 }, 00:06:16.818 { 00:06:16.818 "subsystem": "scheduler", 00:06:16.818 "config": [ 00:06:16.818 { 00:06:16.818 "method": "framework_set_scheduler", 00:06:16.818 "params": { 00:06:16.818 "name": "static" 00:06:16.818 } 00:06:16.818 } 00:06:16.818 ] 00:06:16.818 }, 00:06:16.819 { 00:06:16.819 "subsystem": "vhost_scsi", 00:06:16.819 "config": [] 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "subsystem": "vhost_blk", 00:06:16.819 "config": [] 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "subsystem": "ublk", 00:06:16.819 "config": [] 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "subsystem": "nbd", 00:06:16.819 "config": [] 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "subsystem": "nvmf", 00:06:16.819 "config": [ 00:06:16.819 { 00:06:16.819 "method": "nvmf_set_config", 00:06:16.819 "params": { 00:06:16.819 "discovery_filter": "match_any", 00:06:16.819 "admin_cmd_passthru": { 00:06:16.819 "identify_ctrlr": false 00:06:16.819 }, 00:06:16.819 "dhchap_digests": [ 00:06:16.819 "sha256", 00:06:16.819 "sha384", 00:06:16.819 "sha512" 00:06:16.819 ], 00:06:16.819 "dhchap_dhgroups": [ 00:06:16.819 "null", 00:06:16.819 "ffdhe2048", 00:06:16.819 "ffdhe3072", 00:06:16.819 "ffdhe4096", 00:06:16.819 "ffdhe6144", 00:06:16.819 "ffdhe8192" 00:06:16.819 ] 00:06:16.819 } 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "method": "nvmf_set_max_subsystems", 00:06:16.819 "params": { 00:06:16.819 "max_subsystems": 1024 00:06:16.819 } 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "method": "nvmf_set_crdt", 00:06:16.819 "params": { 00:06:16.819 "crdt1": 0, 00:06:16.819 "crdt2": 0, 00:06:16.819 "crdt3": 0 00:06:16.819 } 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "method": "nvmf_create_transport", 00:06:16.819 "params": { 00:06:16.819 "trtype": "TCP", 00:06:16.819 "max_queue_depth": 128, 00:06:16.819 "max_io_qpairs_per_ctrlr": 127, 00:06:16.819 "in_capsule_data_size": 4096, 00:06:16.819 "max_io_size": 131072, 00:06:16.819 
"io_unit_size": 131072, 00:06:16.819 "max_aq_depth": 128, 00:06:16.819 "num_shared_buffers": 511, 00:06:16.819 "buf_cache_size": 4294967295, 00:06:16.819 "dif_insert_or_strip": false, 00:06:16.819 "zcopy": false, 00:06:16.819 "c2h_success": true, 00:06:16.819 "sock_priority": 0, 00:06:16.819 "abort_timeout_sec": 1, 00:06:16.819 "ack_timeout": 0, 00:06:16.819 "data_wr_pool_size": 0 00:06:16.819 } 00:06:16.819 } 00:06:16.819 ] 00:06:16.819 }, 00:06:16.819 { 00:06:16.819 "subsystem": "iscsi", 00:06:16.819 "config": [ 00:06:16.819 { 00:06:16.819 "method": "iscsi_set_options", 00:06:16.819 "params": { 00:06:16.819 "node_base": "iqn.2016-06.io.spdk", 00:06:16.819 "max_sessions": 128, 00:06:16.819 "max_connections_per_session": 2, 00:06:16.819 "max_queue_depth": 64, 00:06:16.819 "default_time2wait": 2, 00:06:16.819 "default_time2retain": 20, 00:06:16.819 "first_burst_length": 8192, 00:06:16.819 "immediate_data": true, 00:06:16.819 "allow_duplicated_isid": false, 00:06:16.819 "error_recovery_level": 0, 00:06:16.819 "nop_timeout": 60, 00:06:16.819 "nop_in_interval": 30, 00:06:16.819 "disable_chap": false, 00:06:16.819 "require_chap": false, 00:06:16.819 "mutual_chap": false, 00:06:16.819 "chap_group": 0, 00:06:16.819 "max_large_datain_per_connection": 64, 00:06:16.819 "max_r2t_per_connection": 4, 00:06:16.819 "pdu_pool_size": 36864, 00:06:16.819 "immediate_data_pool_size": 16384, 00:06:16.819 "data_out_pool_size": 2048 00:06:16.819 } 00:06:16.819 } 00:06:16.819 ] 00:06:16.819 } 00:06:16.819 ] 00:06:16.819 } 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100302 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100302 ']' 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100302 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100302 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100302' 00:06:16.819 killing process with pid 100302 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100302 00:06:16.819 11:00:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100302 00:06:17.078 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100442 00:06:17.078 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.078 11:00:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100442 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100442 ']' 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100442 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100442 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100442' 00:06:22.344 killing process with pid 100442 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100442 00:06:22.344 11:00:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100442 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.602 00:06:22.602 real 0m6.426s 00:06:22.602 user 0m6.068s 00:06:22.602 sys 0m0.690s 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.602 ************************************ 00:06:22.602 END TEST skip_rpc_with_json 00:06:22.602 ************************************ 00:06:22.602 11:00:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:22.602 11:00:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.602 11:00:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.602 11:00:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.602 ************************************ 00:06:22.602 START TEST skip_rpc_with_delay 00:06:22.602 ************************************ 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.602 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.603 [2024-11-17 11:00:47.179367] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.603 00:06:22.603 real 0m0.074s 00:06:22.603 user 0m0.053s 00:06:22.603 sys 0m0.021s 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.603 11:00:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:22.603 ************************************ 00:06:22.603 END TEST skip_rpc_with_delay 00:06:22.603 ************************************ 00:06:22.603 11:00:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:22.603 11:00:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:22.603 11:00:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:22.603 11:00:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.603 11:00:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.603 11:00:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.603 ************************************ 00:06:22.603 START TEST exit_on_failed_rpc_init 00:06:22.603 ************************************ 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101160 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101160 
00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 101160 ']' 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.603 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.862 [2024-11-17 11:00:47.302514] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:22.862 [2024-11-17 11:00:47.302646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101160 ] 00:06:22.862 [2024-11-17 11:00:47.368950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.862 [2024-11-17 11:00:47.417895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.120 
11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.120 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:23.121 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:23.121 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.121 [2024-11-17 11:00:47.730706] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:23.121 [2024-11-17 11:00:47.730808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101171 ] 00:06:23.379 [2024-11-17 11:00:47.797685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.379 [2024-11-17 11:00:47.844475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.379 [2024-11-17 11:00:47.844651] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:23.379 [2024-11-17 11:00:47.844671] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:23.379 [2024-11-17 11:00:47.844682] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101160 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 101160 ']' 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 101160 00:06:23.379 11:00:47 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101160 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101160' 00:06:23.379 killing process with pid 101160 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 101160 00:06:23.379 11:00:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 101160 00:06:23.948 00:06:23.948 real 0m1.068s 00:06:23.948 user 0m1.145s 00:06:23.948 sys 0m0.429s 00:06:23.948 11:00:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.948 11:00:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.948 ************************************ 00:06:23.948 END TEST exit_on_failed_rpc_init 00:06:23.948 ************************************ 00:06:23.948 11:00:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:23.948 00:06:23.948 real 0m13.350s 00:06:23.948 user 0m12.545s 00:06:23.948 sys 0m1.672s 00:06:23.948 11:00:48 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.948 11:00:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.948 ************************************ 00:06:23.948 END TEST skip_rpc 00:06:23.948 ************************************ 00:06:23.948 11:00:48 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:23.948 11:00:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.948 11:00:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.948 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:06:23.948 ************************************ 00:06:23.948 START TEST rpc_client 00:06:23.948 ************************************ 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:23.948 * Looking for test storage... 00:06:23.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.948 11:00:48 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 
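The lcov version gate traced above walks scripts/common.sh's `cmp_versions`: each version string is split on `.`, `-` and `:` into an array, then the arrays are compared numerically field by field. A standalone sketch of that idiom (the function name `ver_lt` is mine, not the script's):

```shell
# Sketch of the cmp_versions idiom: split on IFS=.-: and compare
# component by component, treating missing fields as 0.
ver_lt() {  # ver_lt A B -> exit 0 iff version A < version B
    local -a ver1 ver2
    local v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]}; ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # missing fields compare as 0, so "1" equals "1.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not less-than
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why `lt 1.15 2` in the trace returns 0 and the branch-coverage LCOV_OPTS get exported for the old lcov.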
00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.948 --rc genhtml_branch_coverage=1 00:06:23.948 --rc genhtml_function_coverage=1 00:06:23.948 --rc genhtml_legend=1 00:06:23.948 --rc geninfo_all_blocks=1 00:06:23.948 --rc geninfo_unexecuted_blocks=1 00:06:23.948 00:06:23.948 ' 00:06:23.948 11:00:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:23.948 OK 00:06:23.948 11:00:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:23.948 00:06:23.948 real 0m0.163s 00:06:23.948 user 0m0.107s 00:06:23.948 sys 0m0.065s 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.948 11:00:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:23.948 ************************************ 00:06:23.948 END TEST rpc_client 00:06:23.948 ************************************ 00:06:23.948 11:00:48 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:23.948 11:00:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.948 11:00:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.948 11:00:48 -- common/autotest_common.sh@10 
-- # set +x 00:06:23.948 ************************************ 00:06:23.948 START TEST json_config 00:06:23.948 ************************************ 00:06:23.948 11:00:48 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.208 11:00:48 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.208 11:00:48 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.208 11:00:48 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.208 11:00:48 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.208 11:00:48 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.208 11:00:48 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:24.208 11:00:48 json_config -- scripts/common.sh@345 -- # : 1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.208 11:00:48 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.208 11:00:48 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@353 -- # local d=1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.208 11:00:48 json_config -- scripts/common.sh@355 -- # echo 1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.208 11:00:48 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@353 -- # local d=2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.208 11:00:48 json_config -- scripts/common.sh@355 -- # echo 2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.208 11:00:48 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.208 11:00:48 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.208 11:00:48 json_config -- scripts/common.sh@368 -- # return 0 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.208 11:00:48 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.209 --rc genhtml_branch_coverage=1 00:06:24.209 --rc genhtml_function_coverage=1 00:06:24.209 --rc genhtml_legend=1 00:06:24.209 --rc geninfo_all_blocks=1 00:06:24.209 --rc geninfo_unexecuted_blocks=1 00:06:24.209 00:06:24.209 ' 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.209 --rc genhtml_branch_coverage=1 00:06:24.209 --rc genhtml_function_coverage=1 00:06:24.209 --rc genhtml_legend=1 00:06:24.209 --rc geninfo_all_blocks=1 00:06:24.209 --rc geninfo_unexecuted_blocks=1 00:06:24.209 00:06:24.209 ' 00:06:24.209 11:00:48 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.209 --rc genhtml_branch_coverage=1 00:06:24.209 --rc genhtml_function_coverage=1 00:06:24.209 --rc genhtml_legend=1 00:06:24.209 --rc geninfo_all_blocks=1 00:06:24.209 --rc geninfo_unexecuted_blocks=1 00:06:24.209 00:06:24.209 ' 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.209 --rc genhtml_branch_coverage=1 00:06:24.209 --rc genhtml_function_coverage=1 00:06:24.209 --rc genhtml_legend=1 00:06:24.209 --rc geninfo_all_blocks=1 00:06:24.209 --rc geninfo_unexecuted_blocks=1 00:06:24.209 00:06:24.209 ' 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.209 11:00:48 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.209 11:00:48 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.209 11:00:48 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.209 11:00:48 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.209 11:00:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.209 11:00:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.209 11:00:48 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.209 11:00:48 json_config -- paths/export.sh@5 -- # export PATH 00:06:24.209 11:00:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@51 -- # : 0 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.209 11:00:48 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@9 -- # source 
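The `[: : integer expression expected` message from nvmf/common.sh line 33 above is the test builtin rejecting an empty string where `-eq` needs an integer: `'[' '' -eq 1 ']'` has nothing numeric on the left-hand side. The sketch below reproduces the failure mode and the usual `${var:-0}` guard; `safe_eq_one` is a made-up name for illustration, not an SPDK helper:

```shell
# [ "" -eq 1 ] errors out because -eq requires integers on both sides.
# Defaulting the operand with ${var:-0} keeps the comparison numeric.
safe_eq_one() {
    local val=$1
    [ "${val:-0}" -eq 1 ]
}
safe_eq_one ""  || echo "empty counts as 0, not an error"
safe_eq_one 1   && echo "1 matches"
```

An `[ -n "$var" ] && [ "$var" -eq 1 ]` chain achieves the same thing when an unset value should be treated as "skip" rather than 0.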
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:24.209 INFO: JSON configuration test init 00:06:24.209 11:00:48 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.209 11:00:48 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:24.209 11:00:48 json_config -- json_config/common.sh@9 -- # local app=target 00:06:24.209 11:00:48 json_config -- json_config/common.sh@10 -- # shift 00:06:24.209 11:00:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.209 11:00:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.209 11:00:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.209 11:00:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.209 11:00:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.209 11:00:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101428 00:06:24.209 11:00:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:24.209 11:00:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:24.209 Waiting for target to run... 
00:06:24.209 11:00:48 json_config -- json_config/common.sh@25 -- # waitforlisten 101428 /var/tmp/spdk_tgt.sock 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 101428 ']' 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.209 11:00:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.209 [2024-11-17 11:00:48.806194] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:24.209 [2024-11-17 11:00:48.806270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101428 ] 00:06:24.776 [2024-11-17 11:00:49.324774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.776 [2024-11-17 11:00:49.365913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:25.342 11:00:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.342 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@726 
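`waitforlisten` in the trace above blocks until spdk_tgt (pid 101428) is up and accepting RPCs on /var/tmp/spdk_tgt.sock, retrying up to `max_retries=100` times. A simplified sketch of that polling idea, checking only that the UNIX socket file exists; `wait_for_socket` is a hypothetical stand-in, and the real helper additionally verifies the pid and issues an RPC:

```shell
# Poll for a UNIX-domain socket to appear, bounded by a retry count.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1                          # timed out waiting
}
```

Bounding the loop is what lets the suite fail fast with a clear error instead of hanging when the target never starts.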
-- # xtrace_disable 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:25.342 11:00:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:25.342 11:00:49 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:25.342 11:00:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:28.633 11:00:52 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.634 11:00:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.634 11:00:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:28.634 11:00:52 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:28.634 11:00:53 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@54 -- # sort 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:28.634 11:00:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:28.634 11:00:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.634 11:00:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.892 11:00:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:28.893 11:00:53 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:28.893 11:00:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.893 11:00:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:28.893 11:00:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.893 11:00:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:28.893 MallocForNvmf0 00:06:29.151 11:00:53 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.151 11:00:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.410 MallocForNvmf1 00:06:29.410 11:00:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.410 11:00:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.668 [2024-11-17 11:00:54.070629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.668 11:00:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.668 11:00:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:29.926 11:00:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:29.926 11:00:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.185 11:00:54 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.185 11:00:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.444 11:00:54 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.444 11:00:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.702 [2024-11-17 11:00:55.138110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.702 11:00:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:30.702 11:00:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.702 11:00:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.702 11:00:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:30.702 11:00:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.702 11:00:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.702 11:00:55 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:06:30.702 11:00:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.702 11:00:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:30.961 MallocBdevForConfigChangeCheck 00:06:30.961 11:00:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:30.961 11:00:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.961 11:00:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.961 11:00:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:30.961 11:00:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.528 11:00:55 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:31.528 INFO: shutting down applications... 
00:06:31.528 11:00:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:31.528 11:00:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:31.529 11:00:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:31.529 11:00:55 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.904 Calling clear_iscsi_subsystem 00:06:32.904 Calling clear_nvmf_subsystem 00:06:32.904 Calling clear_nbd_subsystem 00:06:32.904 Calling clear_ublk_subsystem 00:06:32.904 Calling clear_vhost_blk_subsystem 00:06:32.904 Calling clear_vhost_scsi_subsystem 00:06:32.904 Calling clear_bdev_subsystem 00:06:32.904 11:00:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:33.163 11:00:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:33.163 11:00:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:33.163 11:00:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.163 11:00:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:33.163 11:00:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:33.421 11:00:57 json_config -- json_config/json_config.sh@352 -- # break 00:06:33.421 11:00:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:33.421 11:00:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:33.421 11:00:57 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:33.422 11:00:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:33.422 11:00:57 json_config -- json_config/common.sh@35 -- # [[ -n 101428 ]] 00:06:33.422 11:00:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101428 00:06:33.422 11:00:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:33.422 11:00:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.422 11:00:57 json_config -- json_config/common.sh@41 -- # kill -0 101428 00:06:33.422 11:00:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.994 11:00:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.994 11:00:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.994 11:00:58 json_config -- json_config/common.sh@41 -- # kill -0 101428 00:06:33.994 11:00:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.994 11:00:58 json_config -- json_config/common.sh@43 -- # break 00:06:33.994 11:00:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.994 11:00:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.994 SPDK target shutdown done 00:06:33.994 11:00:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:33.994 INFO: relaunching applications... 
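`json_config_test_shutdown_app` above sends SIGINT to pid 101428 and then polls with `kill -0` (signal 0 performs only an existence check) up to 30 times at 0.5 s intervals before printing `SPDK target shutdown done`. A sketch of that loop with the signal made a parameter; `graceful_stop` is a hypothetical name, and the final `kill -9` escalation is my addition rather than something shown in the trace:

```shell
# Send a shutdown signal, then poll for process exit with kill -0.
graceful_stop() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # already gone
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0      # process exited
        sleep 0.5
    done
    kill -9 "$pid" 2>/dev/null                      # give up waiting
    return 1
}
```

SIGINT first gives spdk_tgt a chance to tear down subsystems cleanly; only the bounded wait keeps a wedged target from stalling the whole pipeline.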
00:06:33.994 11:00:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.994 11:00:58 json_config -- json_config/common.sh@9 -- # local app=target 00:06:33.994 11:00:58 json_config -- json_config/common.sh@10 -- # shift 00:06:33.994 11:00:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.994 11:00:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.994 11:00:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.994 11:00:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.994 11:00:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.994 11:00:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102749 00:06:33.994 11:00:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.994 Waiting for target to run... 00:06:33.994 11:00:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.994 11:00:58 json_config -- json_config/common.sh@25 -- # waitforlisten 102749 /var/tmp/spdk_tgt.sock 00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 102749 ']' 00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.994 11:00:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.994 [2024-11-17 11:00:58.532792] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:33.994 [2024-11-17 11:00:58.532875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102749 ] 00:06:34.563 [2024-11-17 11:00:59.053241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.563 [2024-11-17 11:00:59.094385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.857 [2024-11-17 11:01:02.143311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.857 [2024-11-17 11:01:02.175851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:37.857 11:01:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.857 11:01:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:37.857 11:01:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:37.857 00:06:37.857 11:01:02 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:37.857 11:01:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:37.857 INFO: Checking if target configuration is the same... 
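The `waitforlisten` step traced above launches spdk_tgt with `-r /var/tmp/spdk_tgt.sock` and then retries until the process is up and listening on that UNIX domain socket before issuing RPCs. A simplified sketch of the same idea, with a hypothetical python3 one-liner standing in for spdk_tgt and a made-up socket path:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: start a server in the background,
# then poll until its UNIX domain socket appears before talking to it.
sock=/tmp/sketch_tgt.sock
rm -f "$sock"

# Hypothetical stand-in server: binds the socket, listens briefly.
python3 -c 'import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind("/tmp/sketch_tgt.sock")
s.listen(1)
time.sleep(2)' &

# Poll (up to ~10 s) for the socket file to show up.
for ((i = 0; i < 100; i++)); do
    [ -S "$sock" ] && break
    sleep 0.1
done
```

The real helper also bounds its retries (`max_retries=100` in the trace) so a target that never comes up fails the test instead of hanging it.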
00:06:37.857 11:01:02 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.857 11:01:02 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:37.857 11:01:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:37.857 + '[' 2 -ne 2 ']' 00:06:37.857 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:37.857 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:37.857 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:37.857 +++ basename /dev/fd/62 00:06:37.857 ++ mktemp /tmp/62.XXX 00:06:37.857 + tmp_file_1=/tmp/62.SDa 00:06:37.857 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:37.857 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:37.857 + tmp_file_2=/tmp/spdk_tgt_config.json.Xpm 00:06:37.857 + ret=0 00:06:37.857 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.117 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.117 + diff -u /tmp/62.SDa /tmp/spdk_tgt_config.json.Xpm 00:06:38.117 + echo 'INFO: JSON config files are the same' 00:06:38.117 INFO: JSON config files are the same 00:06:38.117 + rm /tmp/62.SDa /tmp/spdk_tgt_config.json.Xpm 00:06:38.117 + exit 0 00:06:38.117 11:01:02 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:38.117 11:01:02 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:38.117 INFO: changing configuration and checking if this can be detected... 
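The same-configuration check above saves the live config over RPC, pipes both JSON documents through `config_filter.py -method sort`, and diffs the results. A standalone sketch of that canonicalize-then-diff idea, using python3's json module in place of config_filter.py (the file names are made up for the example):

```shell
#!/usr/bin/env bash
# Sketch of the config-comparison step: two JSON documents are treated
# as equal iff their key-sorted canonical dumps are byte-identical.
printf '{"b": 2, "a": 1}\n' > /tmp/live_config.json
printf '{"a": 1, "b": 2}\n' > /tmp/saved_config.json

canon() {
    python3 -c 'import json, sys; json.dump(json.load(sys.stdin), sys.stdout, sort_keys=True)'
}

if diff -u <(canon < /tmp/live_config.json) <(canon < /tmp/saved_config.json) > /dev/null; then
    ret=0
    echo 'INFO: JSON config files are the same'
else
    ret=1
    echo 'INFO: configuration change detected.'
fi
```

Sorting keys first is what makes the byte-level `diff` meaningful: the later "configuration change detected" run only reports a difference after a bdev is actually deleted, not because of key ordering.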
00:06:38.117 11:01:02 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.118 11:01:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:38.377 11:01:02 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.377 11:01:02 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:38.377 11:01:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:38.377 + '[' 2 -ne 2 ']' 00:06:38.377 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:38.377 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:38.377 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:38.377 +++ basename /dev/fd/62 00:06:38.377 ++ mktemp /tmp/62.XXX 00:06:38.377 + tmp_file_1=/tmp/62.2dR 00:06:38.377 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:38.377 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:38.377 + tmp_file_2=/tmp/spdk_tgt_config.json.AFN 00:06:38.377 + ret=0 00:06:38.377 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.944 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:38.944 + diff -u /tmp/62.2dR /tmp/spdk_tgt_config.json.AFN 00:06:38.944 + ret=1 00:06:38.944 + echo '=== Start of file: /tmp/62.2dR ===' 00:06:38.944 + cat /tmp/62.2dR 00:06:38.944 + echo '=== End of file: /tmp/62.2dR ===' 00:06:38.944 + echo '' 00:06:38.944 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AFN ===' 00:06:38.944 + cat /tmp/spdk_tgt_config.json.AFN 00:06:38.944 + echo '=== End of file: /tmp/spdk_tgt_config.json.AFN ===' 00:06:38.944 + echo '' 00:06:38.944 + rm /tmp/62.2dR /tmp/spdk_tgt_config.json.AFN 00:06:38.944 + exit 1 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:38.944 INFO: configuration change detected. 
00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@324 -- # [[ -n 102749 ]] 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.944 11:01:03 json_config -- json_config/json_config.sh@330 -- # killprocess 102749 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@954 -- # '[' -z 102749 ']' 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@958 -- # kill -0 102749 
00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@959 -- # uname 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102749 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102749' 00:06:38.944 killing process with pid 102749 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@973 -- # kill 102749 00:06:38.944 11:01:03 json_config -- common/autotest_common.sh@978 -- # wait 102749 00:06:40.848 11:01:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:40.849 11:01:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:40.849 11:01:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.849 11:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.849 11:01:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:40.849 11:01:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:40.849 INFO: Success 00:06:40.849 00:06:40.849 real 0m16.444s 00:06:40.849 user 0m18.348s 00:06:40.849 sys 0m2.279s 00:06:40.849 11:01:05 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.849 11:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:40.849 ************************************ 00:06:40.849 END TEST json_config 00:06:40.849 ************************************ 00:06:40.849 11:01:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.849 11:01:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.849 11:01:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.849 11:01:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.849 ************************************ 00:06:40.849 START TEST json_config_extra_key 00:06:40.849 ************************************ 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.849 --rc genhtml_branch_coverage=1 00:06:40.849 --rc genhtml_function_coverage=1 00:06:40.849 --rc genhtml_legend=1 00:06:40.849 --rc geninfo_all_blocks=1 
00:06:40.849 --rc geninfo_unexecuted_blocks=1 00:06:40.849 00:06:40.849 ' 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.849 --rc genhtml_branch_coverage=1 00:06:40.849 --rc genhtml_function_coverage=1 00:06:40.849 --rc genhtml_legend=1 00:06:40.849 --rc geninfo_all_blocks=1 00:06:40.849 --rc geninfo_unexecuted_blocks=1 00:06:40.849 00:06:40.849 ' 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.849 --rc genhtml_branch_coverage=1 00:06:40.849 --rc genhtml_function_coverage=1 00:06:40.849 --rc genhtml_legend=1 00:06:40.849 --rc geninfo_all_blocks=1 00:06:40.849 --rc geninfo_unexecuted_blocks=1 00:06:40.849 00:06:40.849 ' 00:06:40.849 11:01:05 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.849 --rc genhtml_branch_coverage=1 00:06:40.849 --rc genhtml_function_coverage=1 00:06:40.849 --rc genhtml_legend=1 00:06:40.849 --rc geninfo_all_blocks=1 00:06:40.849 --rc geninfo_unexecuted_blocks=1 00:06:40.849 00:06:40.849 ' 00:06:40.849 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
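The lcov version gate traced a few lines above (`lt 1.15 2` via `cmp_versions` in scripts/common.sh) splits both version strings on `.`, then compares field by field numerically. A condensed sketch of that comparison; `version_lt` is a hypothetical simplified helper, not the actual scripts/common.sh function:

```shell
#!/usr/bin/env bash
# Sketch of dotted-version comparison as traced from scripts/common.sh:
# split on '.', pad the shorter version with zeros, compare numerically.
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}
```

Field-wise numeric comparison is what makes `1.15 < 2` come out true here, whereas a plain string comparison would rank "1.15" after "2" only by accident of the first character.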
00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.849 11:01:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.849 11:01:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.850 11:01:05 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.850 11:01:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.850 11:01:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.850 11:01:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:40.850 11:01:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:40.850 11:01:05 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.850 11:01:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:40.850 INFO: launching applications... 00:06:40.850 11:01:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103670 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.850 Waiting for target to run... 
00:06:40.850 11:01:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103670 /var/tmp/spdk_tgt.sock 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 103670 ']' 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.850 11:01:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:40.850 [2024-11-17 11:01:05.282004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:40.850 [2024-11-17 11:01:05.282092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103670 ] 00:06:41.418 [2024-11-17 11:01:05.781590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.418 [2024-11-17 11:01:05.818484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.677 11:01:06 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.677 11:01:06 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:41.677 00:06:41.677 11:01:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:41.677 INFO: shutting down applications... 00:06:41.677 11:01:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103670 ]] 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103670 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103670 00:06:41.677 11:01:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103670 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:42.244 11:01:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:42.244 SPDK target shutdown done 00:06:42.244 11:01:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:42.244 Success 00:06:42.244 00:06:42.244 real 0m1.678s 00:06:42.244 user 0m1.469s 00:06:42.244 sys 0m0.629s 00:06:42.244 11:01:06 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.244 11:01:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:06:42.244 ************************************ 00:06:42.244 END TEST json_config_extra_key 00:06:42.244 ************************************ 00:06:42.244 11:01:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.244 11:01:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.244 11:01:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.244 11:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:42.244 ************************************ 00:06:42.244 START TEST alias_rpc 00:06:42.244 ************************************ 00:06:42.244 11:01:06 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:42.244 * Looking for test storage... 00:06:42.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:42.244 11:01:06 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.244 11:01:06 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.244 11:01:06 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.503 11:01:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.503 --rc genhtml_branch_coverage=1 00:06:42.503 --rc genhtml_function_coverage=1 00:06:42.503 --rc genhtml_legend=1 00:06:42.503 --rc geninfo_all_blocks=1 00:06:42.503 --rc geninfo_unexecuted_blocks=1 00:06:42.503 00:06:42.503 ' 
00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.503 --rc genhtml_branch_coverage=1 00:06:42.503 --rc genhtml_function_coverage=1 00:06:42.503 --rc genhtml_legend=1 00:06:42.503 --rc geninfo_all_blocks=1 00:06:42.503 --rc geninfo_unexecuted_blocks=1 00:06:42.503 00:06:42.503 ' 00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.503 --rc genhtml_branch_coverage=1 00:06:42.503 --rc genhtml_function_coverage=1 00:06:42.503 --rc genhtml_legend=1 00:06:42.503 --rc geninfo_all_blocks=1 00:06:42.503 --rc geninfo_unexecuted_blocks=1 00:06:42.503 00:06:42.503 ' 00:06:42.503 11:01:06 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.503 --rc genhtml_branch_coverage=1 00:06:42.503 --rc genhtml_function_coverage=1 00:06:42.503 --rc genhtml_legend=1 00:06:42.503 --rc geninfo_all_blocks=1 00:06:42.503 --rc geninfo_unexecuted_blocks=1 00:06:42.503 00:06:42.503 ' 00:06:42.503 11:01:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.503 11:01:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103870 00:06:42.503 11:01:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.504 11:01:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103870 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 103870 ']' 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.504 11:01:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.504 [2024-11-17 11:01:07.027333] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:42.504 [2024-11-17 11:01:07.027407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103870 ] 00:06:42.504 [2024-11-17 11:01:07.096401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.504 [2024-11-17 11:01:07.142697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.762 11:01:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.762 11:01:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.762 11:01:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:43.329 11:01:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103870 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 103870 ']' 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 103870 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103870 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.329 11:01:07 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 103870' 00:06:43.329 killing process with pid 103870 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 103870 00:06:43.329 11:01:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 103870 00:06:43.587 00:06:43.587 real 0m1.275s 00:06:43.587 user 0m1.410s 00:06:43.587 sys 0m0.422s 00:06:43.587 11:01:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.587 11:01:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 ************************************ 00:06:43.587 END TEST alias_rpc 00:06:43.587 ************************************ 00:06:43.587 11:01:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:43.587 11:01:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:43.587 11:01:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.587 11:01:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.587 11:01:08 -- common/autotest_common.sh@10 -- # set +x 00:06:43.587 ************************************ 00:06:43.587 START TEST spdkcli_tcp 00:06:43.587 ************************************ 00:06:43.587 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:43.587 * Looking for test storage... 
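[Editor's note] The teardown traced above (`kill -0` liveness probe, `ps --no-headers -o comm=` to read the process name, refuse to signal `sudo` directly, then kill and wait) is `autotest_common.sh`'s `killprocess` pattern. A hedged reconstruction from the xtrace, not the verbatim SPDK helper:

```shell
# Sketch of the killprocess teardown seen in the trace: check the pid
# is set and alive, avoid signalling a bare sudo wrapper, kill, reap.
killprocess_sketch() {
    local pid=$1
    if [ -z "$pid" ]; then
        return 1                            # no pid recorded
    fi
    if ! kill -0 "$pid" 2>/dev/null; then
        return 0                            # already gone
    fi
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            pid=$(pgrep -P "$pid")          # kill sudo's child, not sudo
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap if it is our child
}
```

The `-z` pre-check and the `reactor_0 = sudo` comparison in the log correspond to the guard clauses sketched here.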
00:06:43.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:43.587 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.587 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.587 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.846 11:01:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.846 --rc genhtml_branch_coverage=1 00:06:43.846 --rc genhtml_function_coverage=1 00:06:43.846 --rc genhtml_legend=1 00:06:43.846 --rc geninfo_all_blocks=1 00:06:43.846 --rc geninfo_unexecuted_blocks=1 00:06:43.846 00:06:43.846 ' 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.846 --rc genhtml_branch_coverage=1 00:06:43.846 --rc genhtml_function_coverage=1 00:06:43.846 --rc genhtml_legend=1 00:06:43.846 --rc geninfo_all_blocks=1 00:06:43.846 --rc geninfo_unexecuted_blocks=1 00:06:43.846 00:06:43.846 ' 00:06:43.846 11:01:08 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.846 --rc genhtml_branch_coverage=1 00:06:43.846 --rc genhtml_function_coverage=1 00:06:43.846 --rc genhtml_legend=1 00:06:43.846 --rc geninfo_all_blocks=1 00:06:43.846 --rc geninfo_unexecuted_blocks=1 00:06:43.846 00:06:43.846 ' 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.846 --rc genhtml_branch_coverage=1 00:06:43.846 --rc genhtml_function_coverage=1 00:06:43.846 --rc genhtml_legend=1 00:06:43.846 --rc geninfo_all_blocks=1 00:06:43.846 --rc geninfo_unexecuted_blocks=1 00:06:43.846 00:06:43.846 ' 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104131 00:06:43.846 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:43.846 11:01:08 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104131 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104131 ']' 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.846 11:01:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:43.846 [2024-11-17 11:01:08.359213] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:43.846 [2024-11-17 11:01:08.359293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104131 ] 00:06:43.846 [2024-11-17 11:01:08.424430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.846 [2024-11-17 11:01:08.471044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.846 [2024-11-17 11:01:08.471049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.105 11:01:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.105 11:01:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:44.105 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104195 00:06:44.105 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:44.105 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:06:44.363 [ 00:06:44.363 "bdev_malloc_delete", 00:06:44.363 "bdev_malloc_create", 00:06:44.363 "bdev_null_resize", 00:06:44.363 "bdev_null_delete", 00:06:44.363 "bdev_null_create", 00:06:44.363 "bdev_nvme_cuse_unregister", 00:06:44.363 "bdev_nvme_cuse_register", 00:06:44.363 "bdev_opal_new_user", 00:06:44.363 "bdev_opal_set_lock_state", 00:06:44.363 "bdev_opal_delete", 00:06:44.363 "bdev_opal_get_info", 00:06:44.363 "bdev_opal_create", 00:06:44.363 "bdev_nvme_opal_revert", 00:06:44.363 "bdev_nvme_opal_init", 00:06:44.363 "bdev_nvme_send_cmd", 00:06:44.363 "bdev_nvme_set_keys", 00:06:44.363 "bdev_nvme_get_path_iostat", 00:06:44.363 "bdev_nvme_get_mdns_discovery_info", 00:06:44.363 "bdev_nvme_stop_mdns_discovery", 00:06:44.363 "bdev_nvme_start_mdns_discovery", 00:06:44.363 "bdev_nvme_set_multipath_policy", 00:06:44.363 "bdev_nvme_set_preferred_path", 00:06:44.363 "bdev_nvme_get_io_paths", 00:06:44.363 "bdev_nvme_remove_error_injection", 00:06:44.363 "bdev_nvme_add_error_injection", 00:06:44.363 "bdev_nvme_get_discovery_info", 00:06:44.363 "bdev_nvme_stop_discovery", 00:06:44.363 "bdev_nvme_start_discovery", 00:06:44.363 "bdev_nvme_get_controller_health_info", 00:06:44.363 "bdev_nvme_disable_controller", 00:06:44.363 "bdev_nvme_enable_controller", 00:06:44.363 "bdev_nvme_reset_controller", 00:06:44.363 "bdev_nvme_get_transport_statistics", 00:06:44.363 "bdev_nvme_apply_firmware", 00:06:44.363 "bdev_nvme_detach_controller", 00:06:44.363 "bdev_nvme_get_controllers", 00:06:44.363 "bdev_nvme_attach_controller", 00:06:44.363 "bdev_nvme_set_hotplug", 00:06:44.363 "bdev_nvme_set_options", 00:06:44.363 "bdev_passthru_delete", 00:06:44.363 "bdev_passthru_create", 00:06:44.363 "bdev_lvol_set_parent_bdev", 00:06:44.363 "bdev_lvol_set_parent", 00:06:44.363 "bdev_lvol_check_shallow_copy", 00:06:44.363 "bdev_lvol_start_shallow_copy", 00:06:44.363 "bdev_lvol_grow_lvstore", 00:06:44.363 "bdev_lvol_get_lvols", 00:06:44.363 "bdev_lvol_get_lvstores", 
00:06:44.363 "bdev_lvol_delete", 00:06:44.363 "bdev_lvol_set_read_only", 00:06:44.363 "bdev_lvol_resize", 00:06:44.363 "bdev_lvol_decouple_parent", 00:06:44.363 "bdev_lvol_inflate", 00:06:44.363 "bdev_lvol_rename", 00:06:44.363 "bdev_lvol_clone_bdev", 00:06:44.363 "bdev_lvol_clone", 00:06:44.363 "bdev_lvol_snapshot", 00:06:44.363 "bdev_lvol_create", 00:06:44.363 "bdev_lvol_delete_lvstore", 00:06:44.363 "bdev_lvol_rename_lvstore", 00:06:44.363 "bdev_lvol_create_lvstore", 00:06:44.363 "bdev_raid_set_options", 00:06:44.363 "bdev_raid_remove_base_bdev", 00:06:44.363 "bdev_raid_add_base_bdev", 00:06:44.363 "bdev_raid_delete", 00:06:44.363 "bdev_raid_create", 00:06:44.363 "bdev_raid_get_bdevs", 00:06:44.363 "bdev_error_inject_error", 00:06:44.363 "bdev_error_delete", 00:06:44.363 "bdev_error_create", 00:06:44.363 "bdev_split_delete", 00:06:44.363 "bdev_split_create", 00:06:44.363 "bdev_delay_delete", 00:06:44.363 "bdev_delay_create", 00:06:44.363 "bdev_delay_update_latency", 00:06:44.363 "bdev_zone_block_delete", 00:06:44.363 "bdev_zone_block_create", 00:06:44.363 "blobfs_create", 00:06:44.363 "blobfs_detect", 00:06:44.363 "blobfs_set_cache_size", 00:06:44.363 "bdev_aio_delete", 00:06:44.363 "bdev_aio_rescan", 00:06:44.363 "bdev_aio_create", 00:06:44.363 "bdev_ftl_set_property", 00:06:44.363 "bdev_ftl_get_properties", 00:06:44.363 "bdev_ftl_get_stats", 00:06:44.363 "bdev_ftl_unmap", 00:06:44.363 "bdev_ftl_unload", 00:06:44.363 "bdev_ftl_delete", 00:06:44.363 "bdev_ftl_load", 00:06:44.363 "bdev_ftl_create", 00:06:44.363 "bdev_virtio_attach_controller", 00:06:44.363 "bdev_virtio_scsi_get_devices", 00:06:44.363 "bdev_virtio_detach_controller", 00:06:44.363 "bdev_virtio_blk_set_hotplug", 00:06:44.363 "bdev_iscsi_delete", 00:06:44.363 "bdev_iscsi_create", 00:06:44.363 "bdev_iscsi_set_options", 00:06:44.363 "accel_error_inject_error", 00:06:44.363 "ioat_scan_accel_module", 00:06:44.363 "dsa_scan_accel_module", 00:06:44.363 "iaa_scan_accel_module", 00:06:44.363 
"vfu_virtio_create_fs_endpoint", 00:06:44.363 "vfu_virtio_create_scsi_endpoint", 00:06:44.363 "vfu_virtio_scsi_remove_target", 00:06:44.363 "vfu_virtio_scsi_add_target", 00:06:44.363 "vfu_virtio_create_blk_endpoint", 00:06:44.363 "vfu_virtio_delete_endpoint", 00:06:44.363 "keyring_file_remove_key", 00:06:44.363 "keyring_file_add_key", 00:06:44.363 "keyring_linux_set_options", 00:06:44.363 "fsdev_aio_delete", 00:06:44.363 "fsdev_aio_create", 00:06:44.363 "iscsi_get_histogram", 00:06:44.363 "iscsi_enable_histogram", 00:06:44.363 "iscsi_set_options", 00:06:44.363 "iscsi_get_auth_groups", 00:06:44.363 "iscsi_auth_group_remove_secret", 00:06:44.363 "iscsi_auth_group_add_secret", 00:06:44.363 "iscsi_delete_auth_group", 00:06:44.363 "iscsi_create_auth_group", 00:06:44.363 "iscsi_set_discovery_auth", 00:06:44.363 "iscsi_get_options", 00:06:44.363 "iscsi_target_node_request_logout", 00:06:44.363 "iscsi_target_node_set_redirect", 00:06:44.363 "iscsi_target_node_set_auth", 00:06:44.363 "iscsi_target_node_add_lun", 00:06:44.364 "iscsi_get_stats", 00:06:44.364 "iscsi_get_connections", 00:06:44.364 "iscsi_portal_group_set_auth", 00:06:44.364 "iscsi_start_portal_group", 00:06:44.364 "iscsi_delete_portal_group", 00:06:44.364 "iscsi_create_portal_group", 00:06:44.364 "iscsi_get_portal_groups", 00:06:44.364 "iscsi_delete_target_node", 00:06:44.364 "iscsi_target_node_remove_pg_ig_maps", 00:06:44.364 "iscsi_target_node_add_pg_ig_maps", 00:06:44.364 "iscsi_create_target_node", 00:06:44.364 "iscsi_get_target_nodes", 00:06:44.364 "iscsi_delete_initiator_group", 00:06:44.364 "iscsi_initiator_group_remove_initiators", 00:06:44.364 "iscsi_initiator_group_add_initiators", 00:06:44.364 "iscsi_create_initiator_group", 00:06:44.364 "iscsi_get_initiator_groups", 00:06:44.364 "nvmf_set_crdt", 00:06:44.364 "nvmf_set_config", 00:06:44.364 "nvmf_set_max_subsystems", 00:06:44.364 "nvmf_stop_mdns_prr", 00:06:44.364 "nvmf_publish_mdns_prr", 00:06:44.364 "nvmf_subsystem_get_listeners", 00:06:44.364 
"nvmf_subsystem_get_qpairs", 00:06:44.364 "nvmf_subsystem_get_controllers", 00:06:44.364 "nvmf_get_stats", 00:06:44.364 "nvmf_get_transports", 00:06:44.364 "nvmf_create_transport", 00:06:44.364 "nvmf_get_targets", 00:06:44.364 "nvmf_delete_target", 00:06:44.364 "nvmf_create_target", 00:06:44.364 "nvmf_subsystem_allow_any_host", 00:06:44.364 "nvmf_subsystem_set_keys", 00:06:44.364 "nvmf_subsystem_remove_host", 00:06:44.364 "nvmf_subsystem_add_host", 00:06:44.364 "nvmf_ns_remove_host", 00:06:44.364 "nvmf_ns_add_host", 00:06:44.364 "nvmf_subsystem_remove_ns", 00:06:44.364 "nvmf_subsystem_set_ns_ana_group", 00:06:44.364 "nvmf_subsystem_add_ns", 00:06:44.364 "nvmf_subsystem_listener_set_ana_state", 00:06:44.364 "nvmf_discovery_get_referrals", 00:06:44.364 "nvmf_discovery_remove_referral", 00:06:44.364 "nvmf_discovery_add_referral", 00:06:44.364 "nvmf_subsystem_remove_listener", 00:06:44.364 "nvmf_subsystem_add_listener", 00:06:44.364 "nvmf_delete_subsystem", 00:06:44.364 "nvmf_create_subsystem", 00:06:44.364 "nvmf_get_subsystems", 00:06:44.364 "env_dpdk_get_mem_stats", 00:06:44.364 "nbd_get_disks", 00:06:44.364 "nbd_stop_disk", 00:06:44.364 "nbd_start_disk", 00:06:44.364 "ublk_recover_disk", 00:06:44.364 "ublk_get_disks", 00:06:44.364 "ublk_stop_disk", 00:06:44.364 "ublk_start_disk", 00:06:44.364 "ublk_destroy_target", 00:06:44.364 "ublk_create_target", 00:06:44.364 "virtio_blk_create_transport", 00:06:44.364 "virtio_blk_get_transports", 00:06:44.364 "vhost_controller_set_coalescing", 00:06:44.364 "vhost_get_controllers", 00:06:44.364 "vhost_delete_controller", 00:06:44.364 "vhost_create_blk_controller", 00:06:44.364 "vhost_scsi_controller_remove_target", 00:06:44.364 "vhost_scsi_controller_add_target", 00:06:44.364 "vhost_start_scsi_controller", 00:06:44.364 "vhost_create_scsi_controller", 00:06:44.364 "thread_set_cpumask", 00:06:44.364 "scheduler_set_options", 00:06:44.364 "framework_get_governor", 00:06:44.364 "framework_get_scheduler", 00:06:44.364 
"framework_set_scheduler", 00:06:44.364 "framework_get_reactors", 00:06:44.364 "thread_get_io_channels", 00:06:44.364 "thread_get_pollers", 00:06:44.364 "thread_get_stats", 00:06:44.364 "framework_monitor_context_switch", 00:06:44.364 "spdk_kill_instance", 00:06:44.364 "log_enable_timestamps", 00:06:44.364 "log_get_flags", 00:06:44.364 "log_clear_flag", 00:06:44.364 "log_set_flag", 00:06:44.364 "log_get_level", 00:06:44.364 "log_set_level", 00:06:44.364 "log_get_print_level", 00:06:44.364 "log_set_print_level", 00:06:44.364 "framework_enable_cpumask_locks", 00:06:44.364 "framework_disable_cpumask_locks", 00:06:44.364 "framework_wait_init", 00:06:44.364 "framework_start_init", 00:06:44.364 "scsi_get_devices", 00:06:44.364 "bdev_get_histogram", 00:06:44.364 "bdev_enable_histogram", 00:06:44.364 "bdev_set_qos_limit", 00:06:44.364 "bdev_set_qd_sampling_period", 00:06:44.364 "bdev_get_bdevs", 00:06:44.364 "bdev_reset_iostat", 00:06:44.364 "bdev_get_iostat", 00:06:44.364 "bdev_examine", 00:06:44.364 "bdev_wait_for_examine", 00:06:44.364 "bdev_set_options", 00:06:44.364 "accel_get_stats", 00:06:44.364 "accel_set_options", 00:06:44.364 "accel_set_driver", 00:06:44.364 "accel_crypto_key_destroy", 00:06:44.364 "accel_crypto_keys_get", 00:06:44.364 "accel_crypto_key_create", 00:06:44.364 "accel_assign_opc", 00:06:44.364 "accel_get_module_info", 00:06:44.364 "accel_get_opc_assignments", 00:06:44.364 "vmd_rescan", 00:06:44.364 "vmd_remove_device", 00:06:44.364 "vmd_enable", 00:06:44.364 "sock_get_default_impl", 00:06:44.364 "sock_set_default_impl", 00:06:44.364 "sock_impl_set_options", 00:06:44.364 "sock_impl_get_options", 00:06:44.364 "iobuf_get_stats", 00:06:44.364 "iobuf_set_options", 00:06:44.364 "keyring_get_keys", 00:06:44.364 "vfu_tgt_set_base_path", 00:06:44.364 "framework_get_pci_devices", 00:06:44.364 "framework_get_config", 00:06:44.364 "framework_get_subsystems", 00:06:44.364 "fsdev_set_opts", 00:06:44.364 "fsdev_get_opts", 00:06:44.364 "trace_get_info", 
00:06:44.364 "trace_get_tpoint_group_mask", 00:06:44.364 "trace_disable_tpoint_group", 00:06:44.364 "trace_enable_tpoint_group", 00:06:44.364 "trace_clear_tpoint_mask", 00:06:44.364 "trace_set_tpoint_mask", 00:06:44.364 "notify_get_notifications", 00:06:44.364 "notify_get_types", 00:06:44.364 "spdk_get_version", 00:06:44.364 "rpc_get_methods" 00:06:44.364 ] 00:06:44.364 11:01:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:44.364 11:01:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.364 11:01:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.364 11:01:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:44.364 11:01:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104131 00:06:44.364 11:01:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104131 ']' 00:06:44.364 11:01:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104131 00:06:44.364 11:01:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:44.364 11:01:09 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.364 11:01:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104131 00:06:44.622 11:01:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.622 11:01:09 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.622 11:01:09 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104131' 00:06:44.622 killing process with pid 104131 00:06:44.622 11:01:09 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104131 00:06:44.622 11:01:09 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104131 00:06:44.881 00:06:44.881 real 0m1.264s 00:06:44.881 user 0m2.255s 00:06:44.881 sys 0m0.483s 00:06:44.881 11:01:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.881 11:01:09 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:06:44.881 ************************************ 00:06:44.881 END TEST spdkcli_tcp 00:06:44.881 ************************************ 00:06:44.881 11:01:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:44.881 11:01:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.881 11:01:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.881 11:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:44.881 ************************************ 00:06:44.881 START TEST dpdk_mem_utility 00:06:44.881 ************************************ 00:06:44.881 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:44.881 * Looking for test storage... 00:06:44.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:44.881 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.881 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.881 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.140 11:01:09 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.140 11:01:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.140 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.140 --rc genhtml_branch_coverage=1 00:06:45.140 --rc genhtml_function_coverage=1 00:06:45.140 --rc genhtml_legend=1 00:06:45.140 --rc geninfo_all_blocks=1 00:06:45.140 --rc geninfo_unexecuted_blocks=1 00:06:45.140 00:06:45.140 ' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.140 --rc genhtml_branch_coverage=1 00:06:45.140 --rc genhtml_function_coverage=1 00:06:45.140 --rc genhtml_legend=1 00:06:45.140 --rc geninfo_all_blocks=1 00:06:45.140 --rc geninfo_unexecuted_blocks=1 00:06:45.140 00:06:45.140 ' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.140 --rc genhtml_branch_coverage=1 00:06:45.140 --rc genhtml_function_coverage=1 00:06:45.140 --rc genhtml_legend=1 00:06:45.140 --rc geninfo_all_blocks=1 00:06:45.140 --rc geninfo_unexecuted_blocks=1 00:06:45.140 00:06:45.140 ' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.140 --rc genhtml_branch_coverage=1 00:06:45.140 --rc genhtml_function_coverage=1 00:06:45.140 --rc genhtml_legend=1 00:06:45.140 --rc geninfo_all_blocks=1 00:06:45.140 --rc geninfo_unexecuted_blocks=1 00:06:45.140 00:06:45.140 ' 00:06:45.140 11:01:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.140 11:01:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104394 00:06:45.140 11:01:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:45.140 11:01:09 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104394 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 104394 ']' 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.140 11:01:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.140 [2024-11-17 11:01:09.668907] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:45.140 [2024-11-17 11:01:09.669012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104394 ] 00:06:45.140 [2024-11-17 11:01:09.736174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.140 [2024-11-17 11:01:09.784782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.399 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.399 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:45.399 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:45.399 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:45.399 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.399 
11:01:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:45.399 { 00:06:45.399 "filename": "/tmp/spdk_mem_dump.txt" 00:06:45.399 } 00:06:45.399 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.399 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:45.658 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:45.658 1 heaps totaling size 810.000000 MiB 00:06:45.658 size: 810.000000 MiB heap id: 0 00:06:45.658 end heaps---------- 00:06:45.658 9 mempools totaling size 595.772034 MiB 00:06:45.658 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:45.658 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:45.658 size: 92.545471 MiB name: bdev_io_104394 00:06:45.658 size: 50.003479 MiB name: msgpool_104394 00:06:45.658 size: 36.509338 MiB name: fsdev_io_104394 00:06:45.658 size: 21.763794 MiB name: PDU_Pool 00:06:45.658 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:45.658 size: 4.133484 MiB name: evtpool_104394 00:06:45.658 size: 0.026123 MiB name: Session_Pool 00:06:45.658 end mempools------- 00:06:45.658 6 memzones totaling size 4.142822 MiB 00:06:45.658 size: 1.000366 MiB name: RG_ring_0_104394 00:06:45.658 size: 1.000366 MiB name: RG_ring_1_104394 00:06:45.658 size: 1.000366 MiB name: RG_ring_4_104394 00:06:45.658 size: 1.000366 MiB name: RG_ring_5_104394 00:06:45.658 size: 0.125366 MiB name: RG_ring_2_104394 00:06:45.658 size: 0.015991 MiB name: RG_ring_3_104394 00:06:45.658 end memzones------- 00:06:45.658 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:45.658 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:45.658 list of free elements. 
size: 10.862488 MiB 00:06:45.658 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:45.658 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:45.658 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:45.658 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:45.658 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:45.658 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:45.658 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:45.658 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:45.658 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:45.658 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:45.658 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:45.659 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:45.659 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:45.659 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:45.659 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:45.659 list of standard malloc elements. 
size: 199.218628 MiB 00:06:45.659 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:45.659 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:45.659 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:45.659 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:45.659 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:45.659 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:45.659 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:45.659 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:45.659 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:45.659 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:45.659 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:45.659 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:45.659 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:45.659 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:45.659 list of memzone associated elements. 
size: 599.918884 MiB 00:06:45.659 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:45.659 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:45.659 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:45.659 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:45.659 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:45.659 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104394_0 00:06:45.659 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:45.659 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104394_0 00:06:45.659 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:45.659 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104394_0 00:06:45.659 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:45.659 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:45.659 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:45.659 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:45.659 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:45.659 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104394_0 00:06:45.659 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:45.659 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104394 00:06:45.659 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:45.659 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104394 00:06:45.659 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:45.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:45.659 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:45.659 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:45.659 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:45.659 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:45.659 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:45.659 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:45.659 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:45.659 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104394 00:06:45.659 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:45.659 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104394 00:06:45.659 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:45.659 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104394 00:06:45.659 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:45.659 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104394 00:06:45.659 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:45.659 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104394 00:06:45.659 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:45.659 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104394 00:06:45.659 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:45.659 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:45.659 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:45.659 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:45.659 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:45.659 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:45.659 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:45.659 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104394 00:06:45.659 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:45.659 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104394 00:06:45.659 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:45.659 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:45.659 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:45.659 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:45.659 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:45.659 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104394 00:06:45.659 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:45.659 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:45.659 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:45.659 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104394 00:06:45.659 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:45.659 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104394 00:06:45.659 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:45.659 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104394 00:06:45.659 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:45.659 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:45.659 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:45.659 11:01:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104394 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 104394 ']' 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 104394 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104394 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.659 11:01:10 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104394' 00:06:45.659 killing process with pid 104394 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 104394 00:06:45.659 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 104394 00:06:46.229 00:06:46.229 real 0m1.110s 00:06:46.229 user 0m1.108s 00:06:46.229 sys 0m0.423s 00:06:46.229 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.229 11:01:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:46.229 ************************************ 00:06:46.229 END TEST dpdk_mem_utility 00:06:46.229 ************************************ 00:06:46.229 11:01:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:46.229 11:01:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.229 11:01:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.229 11:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:46.229 ************************************ 00:06:46.229 START TEST event 00:06:46.229 ************************************ 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:46.229 * Looking for test storage... 
00:06:46.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.229 11:01:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.229 11:01:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.229 11:01:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.229 11:01:10 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.229 11:01:10 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.229 11:01:10 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.229 11:01:10 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.229 11:01:10 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.229 11:01:10 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.229 11:01:10 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.229 11:01:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.229 11:01:10 event -- scripts/common.sh@344 -- # case "$op" in 00:06:46.229 11:01:10 event -- scripts/common.sh@345 -- # : 1 00:06:46.229 11:01:10 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.229 11:01:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.229 11:01:10 event -- scripts/common.sh@365 -- # decimal 1 00:06:46.229 11:01:10 event -- scripts/common.sh@353 -- # local d=1 00:06:46.229 11:01:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.229 11:01:10 event -- scripts/common.sh@355 -- # echo 1 00:06:46.229 11:01:10 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.229 11:01:10 event -- scripts/common.sh@366 -- # decimal 2 00:06:46.229 11:01:10 event -- scripts/common.sh@353 -- # local d=2 00:06:46.229 11:01:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.229 11:01:10 event -- scripts/common.sh@355 -- # echo 2 00:06:46.229 11:01:10 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.229 11:01:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.229 11:01:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.229 11:01:10 event -- scripts/common.sh@368 -- # return 0 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.229 --rc genhtml_branch_coverage=1 00:06:46.229 --rc genhtml_function_coverage=1 00:06:46.229 --rc genhtml_legend=1 00:06:46.229 --rc geninfo_all_blocks=1 00:06:46.229 --rc geninfo_unexecuted_blocks=1 00:06:46.229 00:06:46.229 ' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.229 --rc genhtml_branch_coverage=1 00:06:46.229 --rc genhtml_function_coverage=1 00:06:46.229 --rc genhtml_legend=1 00:06:46.229 --rc geninfo_all_blocks=1 00:06:46.229 --rc geninfo_unexecuted_blocks=1 00:06:46.229 00:06:46.229 ' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.229 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:46.229 --rc genhtml_branch_coverage=1 00:06:46.229 --rc genhtml_function_coverage=1 00:06:46.229 --rc genhtml_legend=1 00:06:46.229 --rc geninfo_all_blocks=1 00:06:46.229 --rc geninfo_unexecuted_blocks=1 00:06:46.229 00:06:46.229 ' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.229 --rc genhtml_branch_coverage=1 00:06:46.229 --rc genhtml_function_coverage=1 00:06:46.229 --rc genhtml_legend=1 00:06:46.229 --rc geninfo_all_blocks=1 00:06:46.229 --rc geninfo_unexecuted_blocks=1 00:06:46.229 00:06:46.229 ' 00:06:46.229 11:01:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:46.229 11:01:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.229 11:01:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:46.229 11:01:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.229 11:01:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.229 ************************************ 00:06:46.229 START TEST event_perf 00:06:46.229 ************************************ 00:06:46.229 11:01:10 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:46.229 Running I/O for 1 seconds...[2024-11-17 11:01:10.816005] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:46.229 [2024-11-17 11:01:10.816081] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104592 ] 00:06:46.489 [2024-11-17 11:01:10.887709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.489 [2024-11-17 11:01:10.937442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.489 [2024-11-17 11:01:10.937555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.489 [2024-11-17 11:01:10.937639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.489 [2024-11-17 11:01:10.937642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.429 Running I/O for 1 seconds... 00:06:47.429 lcore 0: 229785 00:06:47.429 lcore 1: 229784 00:06:47.429 lcore 2: 229784 00:06:47.429 lcore 3: 229784 00:06:47.429 done. 
00:06:47.429 00:06:47.429 real 0m1.179s 00:06:47.429 user 0m4.093s 00:06:47.429 sys 0m0.078s 00:06:47.429 11:01:11 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.429 11:01:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.429 ************************************ 00:06:47.429 END TEST event_perf 00:06:47.429 ************************************ 00:06:47.429 11:01:12 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.429 11:01:12 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:47.429 11:01:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.429 11:01:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.429 ************************************ 00:06:47.429 START TEST event_reactor 00:06:47.429 ************************************ 00:06:47.429 11:01:12 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:47.429 [2024-11-17 11:01:12.040617] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:47.429 [2024-11-17 11:01:12.040674] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104752 ] 00:06:47.690 [2024-11-17 11:01:12.103513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.690 [2024-11-17 11:01:12.148895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.632 test_start 00:06:48.632 oneshot 00:06:48.632 tick 100 00:06:48.632 tick 100 00:06:48.632 tick 250 00:06:48.632 tick 100 00:06:48.632 tick 100 00:06:48.632 tick 250 00:06:48.632 tick 100 00:06:48.632 tick 500 00:06:48.632 tick 100 00:06:48.632 tick 100 00:06:48.632 tick 250 00:06:48.632 tick 100 00:06:48.632 tick 100 00:06:48.632 test_end 00:06:48.632 00:06:48.632 real 0m1.162s 00:06:48.632 user 0m1.093s 00:06:48.632 sys 0m0.065s 00:06:48.632 11:01:13 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.632 11:01:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:48.632 ************************************ 00:06:48.632 END TEST event_reactor 00:06:48.632 ************************************ 00:06:48.632 11:01:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:48.632 11:01:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:48.632 11:01:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.632 11:01:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.632 ************************************ 00:06:48.632 START TEST event_reactor_perf 00:06:48.632 ************************************ 00:06:48.632 11:01:13 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:06:48.633 [2024-11-17 11:01:13.253969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:48.633 [2024-11-17 11:01:13.254032] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104912 ] 00:06:48.894 [2024-11-17 11:01:13.319590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.894 [2024-11-17 11:01:13.365792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.832 test_start 00:06:49.832 test_end 00:06:49.832 Performance: 445979 events per second 00:06:49.832 00:06:49.832 real 0m1.168s 00:06:49.832 user 0m1.094s 00:06:49.832 sys 0m0.069s 00:06:49.832 11:01:14 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.832 11:01:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.832 ************************************ 00:06:49.832 END TEST event_reactor_perf 00:06:49.832 ************************************ 00:06:49.832 11:01:14 event -- event/event.sh@49 -- # uname -s 00:06:49.832 11:01:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:49.832 11:01:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:49.832 11:01:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.832 11:01:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.832 11:01:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.832 ************************************ 00:06:49.832 START TEST event_scheduler 00:06:49.832 ************************************ 00:06:49.832 11:01:14 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:50.092 * Looking for test storage... 00:06:50.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.092 11:01:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:50.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.092 --rc genhtml_branch_coverage=1 00:06:50.092 --rc genhtml_function_coverage=1 00:06:50.092 --rc genhtml_legend=1 00:06:50.092 --rc geninfo_all_blocks=1 00:06:50.092 --rc geninfo_unexecuted_blocks=1 00:06:50.092 00:06:50.092 ' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:50.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.092 --rc genhtml_branch_coverage=1 00:06:50.092 --rc genhtml_function_coverage=1 00:06:50.092 --rc 
genhtml_legend=1 00:06:50.092 --rc geninfo_all_blocks=1 00:06:50.092 --rc geninfo_unexecuted_blocks=1 00:06:50.092 00:06:50.092 ' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:50.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.092 --rc genhtml_branch_coverage=1 00:06:50.092 --rc genhtml_function_coverage=1 00:06:50.092 --rc genhtml_legend=1 00:06:50.092 --rc geninfo_all_blocks=1 00:06:50.092 --rc geninfo_unexecuted_blocks=1 00:06:50.092 00:06:50.092 ' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:50.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.092 --rc genhtml_branch_coverage=1 00:06:50.092 --rc genhtml_function_coverage=1 00:06:50.092 --rc genhtml_legend=1 00:06:50.092 --rc geninfo_all_blocks=1 00:06:50.092 --rc geninfo_unexecuted_blocks=1 00:06:50.092 00:06:50.092 ' 00:06:50.092 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:50.092 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105102 00:06:50.092 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:50.092 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.092 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105102 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 105102 ']' 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.092 11:01:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.092 [2024-11-17 11:01:14.649154] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:06:50.092 [2024-11-17 11:01:14.649252] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105102 ] 00:06:50.092 [2024-11-17 11:01:14.715931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.352 [2024-11-17 11:01:14.768618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.352 [2024-11-17 11:01:14.768677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.352 [2024-11-17 11:01:14.768742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.352 [2024-11-17 11:01:14.768745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:50.352 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.352 [2024-11-17 11:01:14.893724] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:50.352 [2024-11-17 11:01:14.893751] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:50.352 [2024-11-17 11:01:14.893767] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:50.352 [2024-11-17 11:01:14.893779] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:50.352 [2024-11-17 11:01:14.893790] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.352 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.352 [2024-11-17 11:01:14.987123] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.352 11:01:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.352 11:01:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 ************************************ 00:06:50.611 START TEST scheduler_create_thread 00:06:50.611 ************************************ 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 2 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 3 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 4 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 5 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 6 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 7 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 8 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 9 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 10 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.611 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.183 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.183 00:06:51.183 real 0m0.591s 00:06:51.183 user 0m0.011s 00:06:51.184 sys 0m0.002s 00:06:51.184 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.184 11:01:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.184 ************************************ 00:06:51.184 END TEST scheduler_create_thread 00:06:51.184 ************************************ 00:06:51.184 11:01:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.184 11:01:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105102 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 105102 ']' 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 105102 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105102 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105102' 00:06:51.184 killing process with pid 105102 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 105102 00:06:51.184 11:01:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 105102 00:06:51.443 [2024-11-17 11:01:16.087230] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:51.702 00:06:51.702 real 0m1.799s 00:06:51.702 user 0m2.520s 00:06:51.702 sys 0m0.330s 00:06:51.702 11:01:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.702 11:01:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.702 ************************************ 00:06:51.702 END TEST event_scheduler 00:06:51.702 ************************************ 00:06:51.702 11:01:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.702 11:01:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.702 11:01:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.702 11:01:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.702 11:01:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.702 ************************************ 00:06:51.702 START TEST app_repeat 00:06:51.702 ************************************ 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105417 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105417' 00:06:51.703 Process app_repeat pid: 105417 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.703 spdk_app_start Round 0 00:06:51.703 11:01:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105417 /var/tmp/spdk-nbd.sock 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105417 ']' 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.703 11:01:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.703 [2024-11-17 11:01:16.347857] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:06:51.703 [2024-11-17 11:01:16.347923] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105417 ] 00:06:51.962 [2024-11-17 11:01:16.413857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.962 [2024-11-17 11:01:16.459720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.962 [2024-11-17 11:01:16.459724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.962 11:01:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.962 11:01:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.962 11:01:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.220 Malloc0 00:06:52.220 11:01:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.791 Malloc1 00:06:52.791 11:01:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.791 
11:01:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.791 11:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.792 11:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.051 /dev/nbd0 00:06:53.051 11:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.051 11:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:53.051 1+0 records in 00:06:53.051 1+0 records out 00:06:53.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257023 s, 15.9 MB/s 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.051 11:01:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.051 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.051 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.051 11:01:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.310 /dev/nbd1 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.310 11:01:17 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.310 1+0 records in 00:06:53.310 1+0 records out 00:06:53.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229573 s, 17.8 MB/s 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.310 11:01:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.310 11:01:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.568 11:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.568 { 00:06:53.568 "nbd_device": "/dev/nbd0", 00:06:53.569 "bdev_name": "Malloc0" 00:06:53.569 }, 00:06:53.569 { 00:06:53.569 "nbd_device": "/dev/nbd1", 00:06:53.569 "bdev_name": "Malloc1" 00:06:53.569 } 00:06:53.569 ]' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.569 { 00:06:53.569 "nbd_device": "/dev/nbd0", 00:06:53.569 "bdev_name": "Malloc0" 00:06:53.569 
}, 00:06:53.569 { 00:06:53.569 "nbd_device": "/dev/nbd1", 00:06:53.569 "bdev_name": "Malloc1" 00:06:53.569 } 00:06:53.569 ]' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.569 /dev/nbd1' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.569 /dev/nbd1' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.569 256+0 records in 00:06:53.569 256+0 records out 00:06:53.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507988 s, 206 MB/s 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.569 256+0 records in 00:06:53.569 256+0 records out 00:06:53.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200363 s, 52.3 MB/s 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.569 256+0 records in 00:06:53.569 256+0 records out 00:06:53.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216335 s, 48.5 MB/s 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.569 11:01:18 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.569 11:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.137 11:01:18 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.137 11:01:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.397 11:01:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.397 11:01:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.397 11:01:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.656 11:01:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.656 11:01:19 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.916 11:01:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.916 [2024-11-17 11:01:19.554456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.176 [2024-11-17 11:01:19.598153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.176 [2024-11-17 11:01:19.598153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.176 [2024-11-17 11:01:19.652262] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.176 [2024-11-17 11:01:19.652328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.477 11:01:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.477 11:01:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.477 spdk_app_start Round 1 00:06:58.477 11:01:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105417 /var/tmp/spdk-nbd.sock 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105417 ']' 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.477 11:01:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:58.477 11:01:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.477 Malloc0 00:06:58.477 11:01:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.736 Malloc1 00:06:58.736 11:01:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.736 11:01:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.995 /dev/nbd0 00:06:58.995 11:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.995 11:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.995 1+0 records in 00:06:58.995 1+0 records out 00:06:58.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255999 s, 16.0 MB/s 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.995 11:01:23 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.995 11:01:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.995 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.995 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.995 11:01:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.253 /dev/nbd1 00:06:59.253 11:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.253 11:01:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.253 11:01:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.253 1+0 records in 00:06:59.254 1+0 records out 00:06:59.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165895 s, 24.7 MB/s 00:06:59.254 11:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.254 11:01:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.254 11:01:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:59.254 11:01:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.254 11:01:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.254 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.254 11:01:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.254 11:01:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.254 11:01:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.254 11:01:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.512 11:01:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.512 { 00:06:59.512 "nbd_device": "/dev/nbd0", 00:06:59.512 "bdev_name": "Malloc0" 00:06:59.512 }, 00:06:59.512 { 00:06:59.512 "nbd_device": "/dev/nbd1", 00:06:59.512 "bdev_name": "Malloc1" 00:06:59.512 } 00:06:59.512 ]' 00:06:59.512 11:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.512 { 00:06:59.512 "nbd_device": "/dev/nbd0", 00:06:59.512 "bdev_name": "Malloc0" 00:06:59.512 }, 00:06:59.512 { 00:06:59.512 "nbd_device": "/dev/nbd1", 00:06:59.512 "bdev_name": "Malloc1" 00:06:59.512 } 00:06:59.512 ]' 00:06:59.512 11:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.771 /dev/nbd1' 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.771 /dev/nbd1' 00:06:59.771 
11:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.771 256+0 records in 00:06:59.771 256+0 records out 00:06:59.771 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499465 s, 210 MB/s 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.771 11:01:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.771 256+0 records in 00:06:59.771 256+0 records out 00:06:59.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207668 s, 50.5 MB/s 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.772 256+0 records in 00:06:59.772 256+0 records out 00:06:59.772 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224752 s, 46.7 MB/s 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.772 11:01:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.030 11:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.031 11:01:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.289 11:01:24 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.289 11:01:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.548 11:01:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.548 11:01:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.807 11:01:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.068 [2024-11-17 11:01:25.633947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.068 [2024-11-17 11:01:25.677124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.068 [2024-11-17 11:01:25.677124] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.329 [2024-11-17 11:01:25.736501] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.329 [2024-11-17 11:01:25.736600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.872 11:01:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.872 11:01:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.872 spdk_app_start Round 2 00:07:03.873 11:01:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105417 /var/tmp/spdk-nbd.sock 00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105417 ']' 00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.873 11:01:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.132 11:01:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.132 11:01:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.132 11:01:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.390 Malloc0 00:07:04.390 11:01:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.650 Malloc1 00:07:04.650 11:01:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.650 11:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.218 /dev/nbd0 00:07:05.218 11:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.218 11:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.218 11:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.218 1+0 records in 00:07:05.218 1+0 records out 00:07:05.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192092 s, 21.3 MB/s 00:07:05.219 11:01:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.219 11:01:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.219 11:01:29 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.219 11:01:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.219 11:01:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.219 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.219 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.219 11:01:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.478 /dev/nbd1 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.478 1+0 records in 00:07:05.478 1+0 records out 00:07:05.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206424 s, 19.8 MB/s 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.478 11:01:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.478 11:01:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.736 { 00:07:05.736 "nbd_device": "/dev/nbd0", 00:07:05.736 "bdev_name": "Malloc0" 00:07:05.736 }, 00:07:05.736 { 00:07:05.736 "nbd_device": "/dev/nbd1", 00:07:05.736 "bdev_name": "Malloc1" 00:07:05.736 } 00:07:05.736 ]' 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.736 { 00:07:05.736 "nbd_device": "/dev/nbd0", 00:07:05.736 "bdev_name": "Malloc0" 00:07:05.736 }, 00:07:05.736 { 00:07:05.736 "nbd_device": "/dev/nbd1", 00:07:05.736 "bdev_name": "Malloc1" 00:07:05.736 } 00:07:05.736 ]' 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.736 /dev/nbd1' 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.736 /dev/nbd1' 00:07:05.736 
11:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.736 11:01:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.737 256+0 records in 00:07:05.737 256+0 records out 00:07:05.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510803 s, 205 MB/s 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.737 256+0 records in 00:07:05.737 256+0 records out 00:07:05.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215625 s, 48.6 MB/s 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.737 256+0 records in 00:07:05.737 256+0 records out 00:07:05.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223713 s, 46.9 MB/s 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.737 11:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.996 11:01:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.566 11:01:30 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.566 11:01:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.825 11:01:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.825 11:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.825 11:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.825 11:01:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.825 11:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.826 11:01:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.826 11:01:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.086 11:01:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.346 [2024-11-17 11:01:31.745706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.346 [2024-11-17 11:01:31.789542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.346 [2024-11-17 11:01:31.789547] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.346 [2024-11-17 11:01:31.843284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.346 [2024-11-17 11:01:31.843347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.907 11:01:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105417 /var/tmp/spdk-nbd.sock 00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105417 ']' 00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.907 11:01:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.168 11:01:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.168 11:01:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.168 11:01:34 event.app_repeat -- event/event.sh@39 -- # killprocess 105417 00:07:10.168 11:01:34 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 105417 ']' 00:07:10.168 11:01:34 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 105417 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105417 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105417' 00:07:10.427 killing process with pid 105417 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@973 -- # kill 105417 00:07:10.427 11:01:34 event.app_repeat -- common/autotest_common.sh@978 -- # wait 105417 00:07:10.427 spdk_app_start is called in Round 0. 00:07:10.427 Shutdown signal received, stop current app iteration 00:07:10.427 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:07:10.427 spdk_app_start is called in Round 1. 00:07:10.427 Shutdown signal received, stop current app iteration 00:07:10.427 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:07:10.427 spdk_app_start is called in Round 2. 
00:07:10.427 Shutdown signal received, stop current app iteration 00:07:10.427 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization... 00:07:10.427 spdk_app_start is called in Round 3. 00:07:10.427 Shutdown signal received, stop current app iteration 00:07:10.427 11:01:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.427 11:01:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.427 00:07:10.427 real 0m18.712s 00:07:10.427 user 0m41.382s 00:07:10.427 sys 0m3.317s 00:07:10.428 11:01:35 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.428 11:01:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.428 ************************************ 00:07:10.428 END TEST app_repeat 00:07:10.428 ************************************ 00:07:10.428 11:01:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.428 11:01:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.428 11:01:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.428 11:01:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.428 11:01:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.687 ************************************ 00:07:10.687 START TEST cpu_locks 00:07:10.687 ************************************ 00:07:10.687 11:01:35 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:10.687 * Looking for test storage... 
00:07:10.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:10.687 11:01:35 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.687 11:01:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.687 11:01:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.687 11:01:35 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.687 11:01:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.688 11:01:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.688 --rc genhtml_branch_coverage=1 00:07:10.688 --rc genhtml_function_coverage=1 00:07:10.688 --rc genhtml_legend=1 00:07:10.688 --rc geninfo_all_blocks=1 00:07:10.688 --rc geninfo_unexecuted_blocks=1 00:07:10.688 00:07:10.688 ' 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.688 --rc genhtml_branch_coverage=1 00:07:10.688 --rc genhtml_function_coverage=1 00:07:10.688 --rc genhtml_legend=1 00:07:10.688 --rc geninfo_all_blocks=1 00:07:10.688 --rc geninfo_unexecuted_blocks=1 
00:07:10.688 00:07:10.688 ' 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.688 --rc genhtml_branch_coverage=1 00:07:10.688 --rc genhtml_function_coverage=1 00:07:10.688 --rc genhtml_legend=1 00:07:10.688 --rc geninfo_all_blocks=1 00:07:10.688 --rc geninfo_unexecuted_blocks=1 00:07:10.688 00:07:10.688 ' 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.688 --rc genhtml_branch_coverage=1 00:07:10.688 --rc genhtml_function_coverage=1 00:07:10.688 --rc genhtml_legend=1 00:07:10.688 --rc geninfo_all_blocks=1 00:07:10.688 --rc geninfo_unexecuted_blocks=1 00:07:10.688 00:07:10.688 ' 00:07:10.688 11:01:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.688 11:01:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.688 11:01:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.688 11:01:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.688 11:01:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.688 ************************************ 00:07:10.688 START TEST default_locks 00:07:10.688 ************************************ 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107878 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 107878 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107878 ']' 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.688 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.688 [2024-11-17 11:01:35.320051] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:10.688 [2024-11-17 11:01:35.320130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107878 ] 00:07:10.950 [2024-11-17 11:01:35.386492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.950 [2024-11-17 11:01:35.431631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.208 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.208 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:11.208 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 107878 00:07:11.208 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 107878 00:07:11.208 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.469 lslocks: write error 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 107878 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 107878 ']' 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 107878 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107878 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107878' 00:07:11.469 killing process with pid 107878 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 107878 00:07:11.469 11:01:35 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 107878 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107878 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 107878 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 107878 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107878 ']' 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (107878) - No such process 00:07:11.729 ERROR: process (pid: 107878) is no longer running 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.729 00:07:11.729 real 0m1.071s 00:07:11.729 user 0m1.042s 00:07:11.729 sys 0m0.492s 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.729 11:01:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.729 ************************************ 00:07:11.729 END TEST default_locks 00:07:11.729 ************************************ 00:07:11.729 11:01:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.729 11:01:36 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.729 11:01:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.729 11:01:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.990 ************************************ 00:07:11.990 START TEST default_locks_via_rpc 00:07:11.990 ************************************ 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108042 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108042 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 108042 ']' 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.990 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.990 [2024-11-17 11:01:36.440466] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:11.990 [2024-11-17 11:01:36.440558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108042 ] 00:07:11.990 [2024-11-17 11:01:36.510156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.990 [2024-11-17 11:01:36.557619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.252 11:01:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108042 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108042 00:07:12.252 11:01:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108042 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 108042 ']' 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 108042 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108042 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108042' 00:07:12.513 killing process with pid 108042 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 108042 00:07:12.513 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 108042 00:07:13.082 00:07:13.082 real 0m1.122s 00:07:13.082 user 0m1.080s 00:07:13.082 sys 0m0.498s 00:07:13.082 11:01:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.082 11:01:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.082 ************************************ 00:07:13.082 END TEST default_locks_via_rpc 00:07:13.082 ************************************ 00:07:13.082 11:01:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:13.082 11:01:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.082 11:01:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.082 11:01:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.082 ************************************ 00:07:13.082 START TEST non_locking_app_on_locked_coremask 00:07:13.082 ************************************ 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108229 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108229 /var/tmp/spdk.sock 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108229 ']' 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:13.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.082 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.082 [2024-11-17 11:01:37.616575] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:13.082 [2024-11-17 11:01:37.616668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108229 ] 00:07:13.082 [2024-11-17 11:01:37.682040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.082 [2024-11-17 11:01:37.728178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.341 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.341 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:13.341 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108237 00:07:13.341 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108237 /var/tmp/spdk2.sock 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108237 ']' 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.342 11:01:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.603 [2024-11-17 11:01:38.028041] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:13.603 [2024-11-17 11:01:38.028121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108237 ] 00:07:13.603 [2024-11-17 11:01:38.122961] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:13.603 [2024-11-17 11:01:38.122987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.603 [2024-11-17 11:01:38.210713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.173 11:01:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.173 11:01:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:14.173 11:01:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108229 00:07:14.173 11:01:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.173 11:01:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108229 00:07:14.747 lslocks: write error 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108229 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108229 ']' 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108229 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108229 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 108229' 00:07:14.747 killing process with pid 108229 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108229 00:07:14.747 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108229 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108237 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108237 ']' 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108237 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108237 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108237' 00:07:15.320 killing process with pid 108237 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108237 00:07:15.320 11:01:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108237 00:07:15.891 00:07:15.891 real 0m2.749s 00:07:15.891 user 0m2.779s 00:07:15.891 sys 0m0.962s 00:07:15.891 11:01:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.891 11:01:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.891 ************************************ 00:07:15.891 END TEST non_locking_app_on_locked_coremask 00:07:15.891 ************************************ 00:07:15.891 11:01:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:15.891 11:01:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.891 11:01:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.891 11:01:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.891 ************************************ 00:07:15.891 START TEST locking_app_on_unlocked_coremask 00:07:15.891 ************************************ 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108535 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108535 /var/tmp/spdk.sock 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108535 ']' 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.891 11:01:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.891 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.891 [2024-11-17 11:01:40.412847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:15.891 [2024-11-17 11:01:40.412942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108535 ] 00:07:15.891 [2024-11-17 11:01:40.479499] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.891 [2024-11-17 11:01:40.479550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.891 [2024-11-17 11:01:40.524170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108549 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108549 /var/tmp/spdk2.sock 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108549 ']' 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.149 11:01:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.410 [2024-11-17 11:01:40.837764] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:16.410 [2024-11-17 11:01:40.837884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108549 ] 00:07:16.410 [2024-11-17 11:01:40.941879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.410 [2024-11-17 11:01:41.030044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.981 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.981 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:16.981 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108549 00:07:16.981 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108549 00:07:16.981 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.552 lslocks: write error 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108535 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108535 ']' 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108535 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.552 11:01:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108535 00:07:17.552 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.552 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.552 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108535' 00:07:17.552 killing process with pid 108535 00:07:17.552 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108535 00:07:17.552 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108535 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108549 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108549 ']' 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108549 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.122 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108549 00:07:18.382 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.382 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.382 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108549' 00:07:18.382 killing process with pid 108549 00:07:18.382 11:01:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108549 00:07:18.382 11:01:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108549 00:07:18.640 00:07:18.640 real 0m2.796s 00:07:18.640 user 0m2.836s 00:07:18.640 sys 0m0.997s 00:07:18.640 11:01:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.640 11:01:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.641 ************************************ 00:07:18.641 END TEST locking_app_on_unlocked_coremask 00:07:18.641 ************************************ 00:07:18.641 11:01:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:18.641 11:01:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.641 11:01:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.641 11:01:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.641 ************************************ 00:07:18.641 START TEST locking_app_on_locked_coremask 00:07:18.641 ************************************ 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=108962 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 108962 /var/tmp/spdk.sock 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108962 ']' 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.641 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.641 [2024-11-17 11:01:43.261381] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:18.641 [2024-11-17 11:01:43.261474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108962 ] 00:07:18.900 [2024-11-17 11:01:43.326862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.900 [2024-11-17 11:01:43.368975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=108976 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 108976 /var/tmp/spdk2.sock 
00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 108976 /var/tmp/spdk2.sock 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 108976 /var/tmp/spdk2.sock 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108976 ']' 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.159 11:01:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.159 [2024-11-17 11:01:43.681588] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:19.159 [2024-11-17 11:01:43.681680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108976 ] 00:07:19.159 [2024-11-17 11:01:43.780381] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 108962 has claimed it. 00:07:19.160 [2024-11-17 11:01:43.780448] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (108976) - No such process 00:07:20.103 ERROR: process (pid: 108976) is no longer running 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 108962 00:07:20.103 11:01:44 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.103 lslocks: write error 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108962 ']' 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108962' 00:07:20.103 killing process with pid 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108962 00:07:20.103 11:01:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108962 00:07:20.671 00:07:20.671 real 0m1.881s 00:07:20.671 user 0m2.108s 00:07:20.671 sys 0m0.599s 00:07:20.671 11:01:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.671 11:01:45 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.671 ************************************ 00:07:20.671 END TEST locking_app_on_locked_coremask 00:07:20.671 ************************************ 00:07:20.671 11:01:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:20.671 11:01:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.671 11:01:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.671 11:01:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.671 ************************************ 00:07:20.671 START TEST locking_overlapped_coremask 00:07:20.671 ************************************ 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109143 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109143 /var/tmp/spdk.sock 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109143 ']' 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.671 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.671 [2024-11-17 11:01:45.191602] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:20.671 [2024-11-17 11:01:45.191698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109143 ] 00:07:20.671 [2024-11-17 11:01:45.255482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.671 [2024-11-17 11:01:45.298960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.671 [2024-11-17 11:01:45.299022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.671 [2024-11-17 11:01:45.299025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109263 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109263 /var/tmp/spdk2.sock 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 109263 /var/tmp/spdk2.sock 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109263 /var/tmp/spdk2.sock 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109263 ']' 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.930 11:01:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.190 [2024-11-17 11:01:45.618887] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:21.190 [2024-11-17 11:01:45.618983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109263 ] 00:07:21.190 [2024-11-17 11:01:45.723063] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109143 has claimed it. 00:07:21.190 [2024-11-17 11:01:45.723123] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109263) - No such process 00:07:21.759 ERROR: process (pid: 109263) is no longer running 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109143 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109143 ']' 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109143 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109143 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109143' 00:07:21.759 killing process with pid 109143 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109143 00:07:21.759 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109143 00:07:22.333 00:07:22.333 real 0m1.616s 00:07:22.333 user 0m4.573s 00:07:22.333 sys 0m0.476s 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.333 ************************************ 
00:07:22.333 END TEST locking_overlapped_coremask 00:07:22.333 ************************************ 00:07:22.333 11:01:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:22.333 11:01:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.333 11:01:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.333 11:01:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.333 ************************************ 00:07:22.333 START TEST locking_overlapped_coremask_via_rpc 00:07:22.333 ************************************ 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109435 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109435 /var/tmp/spdk.sock 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109435 ']' 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:22.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.333 11:01:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.333 [2024-11-17 11:01:46.853378] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:22.333 [2024-11-17 11:01:46.853465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109435 ] 00:07:22.333 [2024-11-17 11:01:46.923834] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:22.333 [2024-11-17 11:01:46.923892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.333 [2024-11-17 11:01:46.976559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.333 [2024-11-17 11:01:46.976585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.333 [2024-11-17 11:01:46.976588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109447 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 109447 /var/tmp/spdk2.sock 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109447 ']' 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.593 11:01:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.853 [2024-11-17 11:01:47.295545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:22.853 [2024-11-17 11:01:47.295640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109447 ] 00:07:22.853 [2024-11-17 11:01:47.399666] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:22.853 [2024-11-17 11:01:47.399715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.853 [2024-11-17 11:01:47.495889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.853 [2024-11-17 11:01:47.499593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.853 [2024-11-17 11:01:47.499596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.792 11:01:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.792 [2024-11-17 11:01:48.286627] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109435 has claimed it. 00:07:23.792 request: 00:07:23.792 { 00:07:23.792 "method": "framework_enable_cpumask_locks", 00:07:23.792 "req_id": 1 00:07:23.792 } 00:07:23.792 Got JSON-RPC error response 00:07:23.792 response: 00:07:23.792 { 00:07:23.792 "code": -32603, 00:07:23.792 "message": "Failed to claim CPU core: 2" 00:07:23.792 } 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109435 /var/tmp/spdk.sock 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 109435 ']' 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.792 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109447 /var/tmp/spdk2.sock 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109447 ']' 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.051 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:24.328 00:07:24.328 real 0m2.060s 00:07:24.328 user 0m1.133s 00:07:24.328 sys 0m0.189s 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.328 11:01:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.328 ************************************ 00:07:24.328 END TEST locking_overlapped_coremask_via_rpc 00:07:24.328 ************************************ 00:07:24.328 11:01:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:24.328 11:01:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109435 ]] 00:07:24.328 11:01:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109435 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109435 ']' 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109435 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109435 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109435' 00:07:24.328 killing process with pid 109435 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109435 00:07:24.328 11:01:48 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109435 00:07:24.918 11:01:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109447 ]] 00:07:24.918 11:01:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109447 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109447 ']' 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109447 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109447 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109447' 00:07:24.918 
killing process with pid 109447 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109447 00:07:24.918 11:01:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109447 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109435 ]] 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109435 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109435 ']' 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109435 00:07:25.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109435) - No such process 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109435 is not found' 00:07:25.186 Process with pid 109435 is not found 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109447 ]] 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109447 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109447 ']' 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109447 00:07:25.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109447) - No such process 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109447 is not found' 00:07:25.186 Process with pid 109447 is not found 00:07:25.186 11:01:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.186 00:07:25.186 real 0m14.644s 00:07:25.186 user 0m27.118s 00:07:25.186 sys 0m5.142s 00:07:25.186 11:01:49 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.186 11:01:49 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:25.186 ************************************ 00:07:25.186 END TEST cpu_locks 00:07:25.186 ************************************ 00:07:25.186 00:07:25.186 real 0m39.132s 00:07:25.186 user 1m17.515s 00:07:25.186 sys 0m9.278s 00:07:25.186 11:01:49 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.186 11:01:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.186 ************************************ 00:07:25.186 END TEST event 00:07:25.186 ************************************ 00:07:25.186 11:01:49 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.186 11:01:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.186 11:01:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.186 11:01:49 -- common/autotest_common.sh@10 -- # set +x 00:07:25.186 ************************************ 00:07:25.186 START TEST thread 00:07:25.186 ************************************ 00:07:25.186 11:01:49 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:25.460 * Looking for test storage... 
00:07:25.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.460 11:01:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.460 11:01:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.460 11:01:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.460 11:01:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.460 11:01:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.460 11:01:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.460 11:01:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.460 11:01:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.460 11:01:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.460 11:01:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.460 11:01:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.460 11:01:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:25.460 11:01:49 thread -- scripts/common.sh@345 -- # : 1 00:07:25.460 11:01:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.460 11:01:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.460 11:01:49 thread -- scripts/common.sh@365 -- # decimal 1 00:07:25.460 11:01:49 thread -- scripts/common.sh@353 -- # local d=1 00:07:25.460 11:01:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.460 11:01:49 thread -- scripts/common.sh@355 -- # echo 1 00:07:25.460 11:01:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.460 11:01:49 thread -- scripts/common.sh@366 -- # decimal 2 00:07:25.460 11:01:49 thread -- scripts/common.sh@353 -- # local d=2 00:07:25.460 11:01:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.460 11:01:49 thread -- scripts/common.sh@355 -- # echo 2 00:07:25.460 11:01:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.460 11:01:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.460 11:01:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.460 11:01:49 thread -- scripts/common.sh@368 -- # return 0 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.460 --rc genhtml_branch_coverage=1 00:07:25.460 --rc genhtml_function_coverage=1 00:07:25.460 --rc genhtml_legend=1 00:07:25.460 --rc geninfo_all_blocks=1 00:07:25.460 --rc geninfo_unexecuted_blocks=1 00:07:25.460 00:07:25.460 ' 00:07:25.460 11:01:49 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.460 --rc genhtml_branch_coverage=1 00:07:25.460 --rc genhtml_function_coverage=1 00:07:25.460 --rc genhtml_legend=1 00:07:25.460 --rc geninfo_all_blocks=1 00:07:25.460 --rc geninfo_unexecuted_blocks=1 00:07:25.460 00:07:25.460 ' 00:07:25.461 11:01:49 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.461 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.461 --rc genhtml_branch_coverage=1 00:07:25.461 --rc genhtml_function_coverage=1 00:07:25.461 --rc genhtml_legend=1 00:07:25.461 --rc geninfo_all_blocks=1 00:07:25.461 --rc geninfo_unexecuted_blocks=1 00:07:25.461 00:07:25.461 ' 00:07:25.461 11:01:49 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.461 --rc genhtml_branch_coverage=1 00:07:25.461 --rc genhtml_function_coverage=1 00:07:25.461 --rc genhtml_legend=1 00:07:25.461 --rc geninfo_all_blocks=1 00:07:25.461 --rc geninfo_unexecuted_blocks=1 00:07:25.461 00:07:25.461 ' 00:07:25.461 11:01:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.461 11:01:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:25.461 11:01:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.461 11:01:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.461 ************************************ 00:07:25.461 START TEST thread_poller_perf 00:07:25.461 ************************************ 00:07:25.461 11:01:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.461 [2024-11-17 11:01:49.986951] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:25.461 [2024-11-17 11:01:49.987019] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109944 ] 00:07:25.461 [2024-11-17 11:01:50.054111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.461 [2024-11-17 11:01:50.098803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.461 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:26.912 [2024-11-17T10:01:51.570Z] ====================================== 00:07:26.912 [2024-11-17T10:01:51.570Z] busy:2706591774 (cyc) 00:07:26.912 [2024-11-17T10:01:51.570Z] total_run_count: 352000 00:07:26.912 [2024-11-17T10:01:51.570Z] tsc_hz: 2700000000 (cyc) 00:07:26.912 [2024-11-17T10:01:51.570Z] ====================================== 00:07:26.912 [2024-11-17T10:01:51.570Z] poller_cost: 7689 (cyc), 2847 (nsec) 00:07:26.912 00:07:26.912 real 0m1.175s 00:07:26.912 user 0m1.103s 00:07:26.912 sys 0m0.067s 00:07:26.912 11:01:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.912 11:01:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.912 ************************************ 00:07:26.912 END TEST thread_poller_perf 00:07:26.912 ************************************ 00:07:26.912 11:01:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.912 11:01:51 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:26.912 11:01:51 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.912 11:01:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.912 ************************************ 00:07:26.912 START TEST thread_poller_perf 00:07:26.912 
************************************ 00:07:26.912 11:01:51 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:26.912 [2024-11-17 11:01:51.213471] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:26.912 [2024-11-17 11:01:51.213565] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110100 ] 00:07:26.912 [2024-11-17 11:01:51.278996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.912 [2024-11-17 11:01:51.323920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.912 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:27.900 [2024-11-17T10:01:52.558Z] ====================================== 00:07:27.900 [2024-11-17T10:01:52.558Z] busy:2701935417 (cyc) 00:07:27.900 [2024-11-17T10:01:52.558Z] total_run_count: 4656000 00:07:27.900 [2024-11-17T10:01:52.558Z] tsc_hz: 2700000000 (cyc) 00:07:27.900 [2024-11-17T10:01:52.558Z] ====================================== 00:07:27.900 [2024-11-17T10:01:52.558Z] poller_cost: 580 (cyc), 214 (nsec) 00:07:27.900 00:07:27.900 real 0m1.168s 00:07:27.900 user 0m1.102s 00:07:27.900 sys 0m0.061s 00:07:27.900 11:01:52 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.900 11:01:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.900 ************************************ 00:07:27.900 END TEST thread_poller_perf 00:07:27.900 ************************************ 00:07:27.900 11:01:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:27.900 00:07:27.900 real 0m2.585s 00:07:27.900 user 0m2.333s 00:07:27.900 sys 0m0.256s 00:07:27.900 11:01:52 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.900 11:01:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.900 ************************************ 00:07:27.900 END TEST thread 00:07:27.900 ************************************ 00:07:27.900 11:01:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:27.900 11:01:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.900 11:01:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.901 11:01:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.901 11:01:52 -- common/autotest_common.sh@10 -- # set +x 00:07:27.901 ************************************ 00:07:27.901 START TEST app_cmdline 00:07:27.901 ************************************ 00:07:27.901 11:01:52 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.901 * Looking for test storage... 00:07:27.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.901 11:01:52 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.901 11:01:52 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.901 11:01:52 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:28.185 11:01:52 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.185 11:01:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:28.185 11:01:52 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.185 11:01:52 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:28.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.185 --rc genhtml_branch_coverage=1 
00:07:28.185 --rc genhtml_function_coverage=1 00:07:28.185 --rc genhtml_legend=1 00:07:28.185 --rc geninfo_all_blocks=1 00:07:28.185 --rc geninfo_unexecuted_blocks=1 00:07:28.185 00:07:28.185 ' 00:07:28.185 11:01:52 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:28.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.185 --rc genhtml_branch_coverage=1 00:07:28.185 --rc genhtml_function_coverage=1 00:07:28.185 --rc genhtml_legend=1 00:07:28.185 --rc geninfo_all_blocks=1 00:07:28.185 --rc geninfo_unexecuted_blocks=1 00:07:28.185 00:07:28.185 ' 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.186 --rc genhtml_branch_coverage=1 00:07:28.186 --rc genhtml_function_coverage=1 00:07:28.186 --rc genhtml_legend=1 00:07:28.186 --rc geninfo_all_blocks=1 00:07:28.186 --rc geninfo_unexecuted_blocks=1 00:07:28.186 00:07:28.186 ' 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.186 --rc genhtml_branch_coverage=1 00:07:28.186 --rc genhtml_function_coverage=1 00:07:28.186 --rc genhtml_legend=1 00:07:28.186 --rc geninfo_all_blocks=1 00:07:28.186 --rc geninfo_unexecuted_blocks=1 00:07:28.186 00:07:28.186 ' 00:07:28.186 11:01:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:28.186 11:01:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110309 00:07:28.186 11:01:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:28.186 11:01:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110309 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110309 ']' 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.186 11:01:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.186 [2024-11-17 11:01:52.640201] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:28.186 [2024-11-17 11:01:52.640289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110309 ] 00:07:28.186 [2024-11-17 11:01:52.707308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.186 [2024-11-17 11:01:52.753948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.470 11:01:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.470 11:01:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:28.470 11:01:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.751 { 00:07:28.751 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:28.751 "fields": { 00:07:28.751 "major": 25, 00:07:28.751 "minor": 1, 00:07:28.751 "patch": 0, 00:07:28.751 "suffix": "-pre", 00:07:28.751 "commit": "83e8405e4" 00:07:28.751 } 00:07:28.751 } 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.751 11:01:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.751 11:01:53 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:29.033 request: 00:07:29.033 { 00:07:29.033 "method": "env_dpdk_get_mem_stats", 00:07:29.033 "req_id": 1 00:07:29.033 } 00:07:29.033 Got JSON-RPC error response 00:07:29.033 response: 00:07:29.033 { 00:07:29.033 "code": -32601, 00:07:29.033 "message": "Method not found" 00:07:29.033 } 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.033 11:01:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110309 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110309 ']' 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110309 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110309 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110309' 00:07:29.033 killing process with pid 110309 00:07:29.033 11:01:53 
app_cmdline -- common/autotest_common.sh@973 -- # kill 110309 00:07:29.033 11:01:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 110309 00:07:29.657 00:07:29.657 real 0m1.541s 00:07:29.657 user 0m1.925s 00:07:29.657 sys 0m0.475s 00:07:29.657 11:01:53 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.657 11:01:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.657 ************************************ 00:07:29.657 END TEST app_cmdline 00:07:29.657 ************************************ 00:07:29.657 11:01:54 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.657 11:01:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.657 11:01:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.657 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:07:29.657 ************************************ 00:07:29.657 START TEST version 00:07:29.657 ************************************ 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:29.657 * Looking for test storage... 
00:07:29.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.657 11:01:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.657 11:01:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.657 11:01:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.657 11:01:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.657 11:01:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.657 11:01:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.657 11:01:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.657 11:01:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.657 11:01:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.657 11:01:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.657 11:01:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.657 11:01:54 version -- scripts/common.sh@344 -- # case "$op" in 00:07:29.657 11:01:54 version -- scripts/common.sh@345 -- # : 1 00:07:29.657 11:01:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.657 11:01:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.657 11:01:54 version -- scripts/common.sh@365 -- # decimal 1 00:07:29.657 11:01:54 version -- scripts/common.sh@353 -- # local d=1 00:07:29.657 11:01:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.657 11:01:54 version -- scripts/common.sh@355 -- # echo 1 00:07:29.657 11:01:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.657 11:01:54 version -- scripts/common.sh@366 -- # decimal 2 00:07:29.657 11:01:54 version -- scripts/common.sh@353 -- # local d=2 00:07:29.657 11:01:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.657 11:01:54 version -- scripts/common.sh@355 -- # echo 2 00:07:29.657 11:01:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.657 11:01:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.657 11:01:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.657 11:01:54 version -- scripts/common.sh@368 -- # return 0 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.657 --rc genhtml_branch_coverage=1 00:07:29.657 --rc genhtml_function_coverage=1 00:07:29.657 --rc genhtml_legend=1 00:07:29.657 --rc geninfo_all_blocks=1 00:07:29.657 --rc geninfo_unexecuted_blocks=1 00:07:29.657 00:07:29.657 ' 00:07:29.657 11:01:54 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.658 --rc genhtml_branch_coverage=1 00:07:29.658 --rc genhtml_function_coverage=1 00:07:29.658 --rc genhtml_legend=1 00:07:29.658 --rc geninfo_all_blocks=1 00:07:29.658 --rc geninfo_unexecuted_blocks=1 00:07:29.658 00:07:29.658 ' 00:07:29.658 11:01:54 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.658 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.658 --rc genhtml_branch_coverage=1 00:07:29.658 --rc genhtml_function_coverage=1 00:07:29.658 --rc genhtml_legend=1 00:07:29.658 --rc geninfo_all_blocks=1 00:07:29.658 --rc geninfo_unexecuted_blocks=1 00:07:29.658 00:07:29.658 ' 00:07:29.658 11:01:54 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.658 --rc genhtml_branch_coverage=1 00:07:29.658 --rc genhtml_function_coverage=1 00:07:29.658 --rc genhtml_legend=1 00:07:29.658 --rc geninfo_all_blocks=1 00:07:29.658 --rc geninfo_unexecuted_blocks=1 00:07:29.658 00:07:29.658 ' 00:07:29.658 11:01:54 version -- app/version.sh@17 -- # get_header_version major 00:07:29.658 11:01:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # cut -f2 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.658 11:01:54 version -- app/version.sh@17 -- # major=25 00:07:29.658 11:01:54 version -- app/version.sh@18 -- # get_header_version minor 00:07:29.658 11:01:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # cut -f2 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.658 11:01:54 version -- app/version.sh@18 -- # minor=1 00:07:29.658 11:01:54 version -- app/version.sh@19 -- # get_header_version patch 00:07:29.658 11:01:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # cut -f2 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.658 
11:01:54 version -- app/version.sh@19 -- # patch=0 00:07:29.658 11:01:54 version -- app/version.sh@20 -- # get_header_version suffix 00:07:29.658 11:01:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # cut -f2 00:07:29.658 11:01:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:29.658 11:01:54 version -- app/version.sh@20 -- # suffix=-pre 00:07:29.658 11:01:54 version -- app/version.sh@22 -- # version=25.1 00:07:29.658 11:01:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:29.658 11:01:54 version -- app/version.sh@28 -- # version=25.1rc0 00:07:29.658 11:01:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.658 11:01:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:29.658 11:01:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:29.658 11:01:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:29.658 00:07:29.658 real 0m0.202s 00:07:29.658 user 0m0.141s 00:07:29.658 sys 0m0.088s 00:07:29.658 11:01:54 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.658 11:01:54 version -- common/autotest_common.sh@10 -- # set +x 00:07:29.658 ************************************ 00:07:29.658 END TEST version 00:07:29.658 ************************************ 00:07:29.658 11:01:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:29.658 11:01:54 -- spdk/autotest.sh@194 -- # uname -s 00:07:29.658 11:01:54 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:29.658 11:01:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.658 11:01:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:29.658 11:01:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:29.658 11:01:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.658 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:07:29.658 11:01:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:29.658 11:01:54 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:29.658 11:01:54 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.658 11:01:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.658 11:01:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.658 11:01:54 -- common/autotest_common.sh@10 -- # set +x 00:07:29.658 ************************************ 00:07:29.658 START TEST nvmf_tcp 00:07:29.658 ************************************ 00:07:29.658 11:01:54 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.932 * Looking for test storage... 
00:07:29.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.932 11:01:54 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.932 11:01:54 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.932 11:01:54 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.932 11:01:54 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:29.932 11:01:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.933 11:01:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.933 --rc genhtml_branch_coverage=1 00:07:29.933 --rc genhtml_function_coverage=1 00:07:29.933 --rc genhtml_legend=1 00:07:29.933 --rc geninfo_all_blocks=1 00:07:29.933 --rc geninfo_unexecuted_blocks=1 00:07:29.933 00:07:29.933 ' 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.933 --rc genhtml_branch_coverage=1 00:07:29.933 --rc genhtml_function_coverage=1 00:07:29.933 --rc genhtml_legend=1 00:07:29.933 --rc geninfo_all_blocks=1 00:07:29.933 --rc geninfo_unexecuted_blocks=1 00:07:29.933 00:07:29.933 ' 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:29.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.933 --rc genhtml_branch_coverage=1 00:07:29.933 --rc genhtml_function_coverage=1 00:07:29.933 --rc genhtml_legend=1 00:07:29.933 --rc geninfo_all_blocks=1 00:07:29.933 --rc geninfo_unexecuted_blocks=1 00:07:29.933 00:07:29.933 ' 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.933 --rc genhtml_branch_coverage=1 00:07:29.933 --rc genhtml_function_coverage=1 00:07:29.933 --rc genhtml_legend=1 00:07:29.933 --rc geninfo_all_blocks=1 00:07:29.933 --rc geninfo_unexecuted_blocks=1 00:07:29.933 00:07:29.933 ' 00:07:29.933 11:01:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.933 11:01:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:29.933 11:01:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.933 11:01:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 ************************************ 00:07:29.933 START TEST nvmf_target_core 00:07:29.933 ************************************ 00:07:29.933 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.933 * Looking for test storage... 
00:07:29.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:29.933 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.933 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.933 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.215 --rc genhtml_branch_coverage=1 00:07:30.215 --rc genhtml_function_coverage=1 00:07:30.215 --rc genhtml_legend=1 00:07:30.215 --rc geninfo_all_blocks=1 00:07:30.215 --rc geninfo_unexecuted_blocks=1 00:07:30.215 00:07:30.215 ' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.215 --rc genhtml_branch_coverage=1 
00:07:30.215 --rc genhtml_function_coverage=1 00:07:30.215 --rc genhtml_legend=1 00:07:30.215 --rc geninfo_all_blocks=1 00:07:30.215 --rc geninfo_unexecuted_blocks=1 00:07:30.215 00:07:30.215 ' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.215 --rc genhtml_branch_coverage=1 00:07:30.215 --rc genhtml_function_coverage=1 00:07:30.215 --rc genhtml_legend=1 00:07:30.215 --rc geninfo_all_blocks=1 00:07:30.215 --rc geninfo_unexecuted_blocks=1 00:07:30.215 00:07:30.215 ' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.215 --rc genhtml_branch_coverage=1 00:07:30.215 --rc genhtml_function_coverage=1 00:07:30.215 --rc genhtml_legend=1 00:07:30.215 --rc geninfo_all_blocks=1 00:07:30.215 --rc geninfo_unexecuted_blocks=1 00:07:30.215 00:07:30.215 ' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:30.215 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.216 ************************************ 00:07:30.216 START TEST nvmf_abort 00:07:30.216 ************************************ 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:30.216 * Looking for test storage... 
00:07:30.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.216 
11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.216 --rc genhtml_branch_coverage=1 00:07:30.216 --rc genhtml_function_coverage=1 00:07:30.216 --rc genhtml_legend=1 00:07:30.216 --rc geninfo_all_blocks=1 00:07:30.216 --rc 
geninfo_unexecuted_blocks=1 00:07:30.216 00:07:30.216 ' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.216 --rc genhtml_branch_coverage=1 00:07:30.216 --rc genhtml_function_coverage=1 00:07:30.216 --rc genhtml_legend=1 00:07:30.216 --rc geninfo_all_blocks=1 00:07:30.216 --rc geninfo_unexecuted_blocks=1 00:07:30.216 00:07:30.216 ' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.216 --rc genhtml_branch_coverage=1 00:07:30.216 --rc genhtml_function_coverage=1 00:07:30.216 --rc genhtml_legend=1 00:07:30.216 --rc geninfo_all_blocks=1 00:07:30.216 --rc geninfo_unexecuted_blocks=1 00:07:30.216 00:07:30.216 ' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.216 --rc genhtml_branch_coverage=1 00:07:30.216 --rc genhtml_function_coverage=1 00:07:30.216 --rc genhtml_legend=1 00:07:30.216 --rc geninfo_all_blocks=1 00:07:30.216 --rc geninfo_unexecuted_blocks=1 00:07:30.216 00:07:30.216 ' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.216 11:01:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.216 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.217 11:01:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.826 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.826 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.826 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.826 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.827 11:01:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.827 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.827 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.827 11:01:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:32.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.827 11:01:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.827 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:32.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:07:32.827 00:07:32.827 --- 10.0.0.2 ping statistics --- 00:07:32.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.828 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:07:32.828 00:07:32.828 --- 10.0.0.1 ping statistics --- 00:07:32.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.828 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=112419 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 112419 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 112419 ']' 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.828 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:32.828 [2024-11-17 11:01:57.211038] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:07:32.828 [2024-11-17 11:01:57.211126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.828 [2024-11-17 11:01:57.283658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.828 [2024-11-17 11:01:57.332001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.828 [2024-11-17 11:01:57.332067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.828 [2024-11-17 11:01:57.332080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.828 [2024-11-17 11:01:57.332091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.828 [2024-11-17 11:01:57.332100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:32.828 [2024-11-17 11:01:57.333560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.828 [2024-11-17 11:01:57.335558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.828 [2024-11-17 11:01:57.335582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 [2024-11-17 11:01:57.521912] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 Malloc0 00:07:33.088 11:01:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 Delay0 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 [2024-11-17 11:01:57.589578] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.088 11:01:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:33.088 [2024-11-17 11:01:57.705369] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:35.636 Initializing NVMe Controllers 00:07:35.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:35.636 controller IO queue size 128 less than required 00:07:35.636 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:35.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:35.636 Initialization complete. Launching workers. 
00:07:35.636 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28457 00:07:35.636 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28518, failed to submit 62 00:07:35.636 success 28461, unsuccessful 57, failed 0 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.636 rmmod nvme_tcp 00:07:35.636 rmmod nvme_fabrics 00:07:35.636 rmmod nvme_keyring 00:07:35.636 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:35.637 11:01:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 112419 ']' 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 112419 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 112419 ']' 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 112419 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112419 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112419' 00:07:35.637 killing process with pid 112419 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 112419 00:07:35.637 11:01:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 112419 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.637 11:02:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:37.542 00:07:37.542 real 0m7.492s 00:07:37.542 user 0m10.946s 00:07:37.542 sys 0m2.489s 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:37.542 ************************************ 00:07:37.542 END TEST nvmf_abort 00:07:37.542 ************************************ 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.542 11:02:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:37.802 ************************************ 00:07:37.802 START TEST nvmf_ns_hotplug_stress 00:07:37.802 ************************************ 00:07:37.802 11:02:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:37.802 * Looking for test storage... 00:07:37.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.802 
11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.802 11:02:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.802 --rc genhtml_branch_coverage=1 00:07:37.802 --rc genhtml_function_coverage=1 00:07:37.802 --rc genhtml_legend=1 00:07:37.802 --rc geninfo_all_blocks=1 00:07:37.802 --rc geninfo_unexecuted_blocks=1 00:07:37.802 00:07:37.802 ' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.802 --rc genhtml_branch_coverage=1 00:07:37.802 --rc genhtml_function_coverage=1 00:07:37.802 --rc genhtml_legend=1 00:07:37.802 --rc geninfo_all_blocks=1 00:07:37.802 --rc geninfo_unexecuted_blocks=1 00:07:37.802 00:07:37.802 ' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.802 --rc genhtml_branch_coverage=1 00:07:37.802 --rc genhtml_function_coverage=1 00:07:37.802 --rc genhtml_legend=1 00:07:37.802 --rc geninfo_all_blocks=1 00:07:37.802 --rc geninfo_unexecuted_blocks=1 00:07:37.802 00:07:37.802 ' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.802 --rc genhtml_branch_coverage=1 00:07:37.802 --rc genhtml_function_coverage=1 00:07:37.802 --rc genhtml_legend=1 00:07:37.802 --rc geninfo_all_blocks=1 00:07:37.802 --rc geninfo_unexecuted_blocks=1 00:07:37.802 
00:07:37.802 ' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.802 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:37.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:37.803 11:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:40.345 11:02:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:40.345 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:40.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:40.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:40.346 11:02:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:40.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:40.346 11:02:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:40.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.346 11:02:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:40.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:07:40.346 00:07:40.346 --- 10.0.0.2 ping statistics --- 00:07:40.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.346 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:40.346 00:07:40.346 --- 10.0.0.1 ping statistics --- 00:07:40.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.346 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.346 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=114777 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 114777 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 114777 ']' 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.347 11:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.347 [2024-11-17 11:02:04.783556] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:07:40.347 [2024-11-17 11:02:04.783658] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.347 [2024-11-17 11:02:04.854995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.347 [2024-11-17 11:02:04.901155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.347 [2024-11-17 11:02:04.901205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.347 [2024-11-17 11:02:04.901229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.347 [2024-11-17 11:02:04.901239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.347 [2024-11-17 11:02:04.901248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
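Once the reactors are up, the log records a fixed initialization sequence of `rpc.py` calls (transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev chain). The sequence can be sketched as below, with a stub `rpc` function echoing each call in place of the real `scripts/rpc.py`; the NQN, bdev names, and sizes are the ones visible in the log:

```shell
# Stub standing in for scripts/rpc.py; it only echoes and counts each call.
calls=0
rpc() { echo "rpc.py $*"; calls=$((calls + 1)); }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0              # backing bdev for Delay0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc bdev_null_create NULL1 1000 512                   # NULL1 starts at size 1000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The Delay0 bdev (a delay wrapper over Malloc0) and the resizable NULL1 bdev are the two namespaces the stress loop later adds, removes, and resizes under perf load.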
00:07:40.347 [2024-11-17 11:02:04.902644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.347 [2024-11-17 11:02:04.902701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.347 [2024-11-17 11:02:04.902704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:40.606 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:40.864 [2024-11-17 11:02:05.283451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.864 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.122 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.390 [2024-11-17 11:02:05.826352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.390 11:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.648 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:41.908 Malloc0 00:07:41.908 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:42.167 Delay0 00:07:42.167 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.426 11:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:42.685 NULL1 00:07:42.685 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:42.962 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115086 00:07:42.962 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:42.962 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:42.962 11:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.346 Read completed with error (sct=0, sc=11) 00:07:44.346 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.346 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:44.346 11:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:44.605 true 00:07:44.605 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:44.605 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.544 11:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.804 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:45.804 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:46.064 true 00:07:46.064 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:46.064 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.344 11:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.604 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:46.604 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:46.863 true 00:07:46.863 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:46.863 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.122 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.383 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:47.383 11:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:47.642 true 00:07:47.642 11:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:47.642 11:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.582 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.842 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:48.842 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:48.842 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:49.102 true 00:07:49.102 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:49.102 11:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.363 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.623 
11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:49.623 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:49.886 true 00:07:50.147 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:50.147 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.407 11:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.667 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:50.667 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:50.927 true 00:07:50.927 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:50.927 11:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.865 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.123 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:52.124 11:02:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:52.383 true 00:07:52.383 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:52.383 11:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.642 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.900 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:52.900 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:53.158 true 00:07:53.158 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:53.158 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.417 11:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.676 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:53.676 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:53.934 true 00:07:53.934 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:53.934 11:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.872 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:55.131 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:55.131 11:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:55.390 true 00:07:55.390 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:55.390 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.957 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.957 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:55.957 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:56.215 true 00:07:56.215 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:56.215 11:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.152 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.410 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:57.410 11:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:57.669 true 00:07:57.669 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:57.669 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.928 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.186 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:58.186 11:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:58.453 true 00:07:58.453 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:58.453 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.715 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.974 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:58.974 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:59.234 true 00:07:59.234 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:07:59.234 11:02:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.176 11:02:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.435 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:00.435 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:00.694 true 00:08:00.694 11:02:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:00.694 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.954 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.213 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:01.213 11:02:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:01.475 true 00:08:01.736 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:01.736 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.997 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.257 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:02.257 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:02.516 true 00:08:02.516 11:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:02.516 11:02:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.456 11:02:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.714 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:03.714 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:03.974 true 00:08:03.974 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:03.974 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.232 11:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.491 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:04.492 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:04.750 true 00:08:04.750 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:04.750 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.009 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.268 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:05.268 11:02:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:05.526 true 00:08:05.526 11:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:05.526 11:02:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.909 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.909 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:06.909 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:07.169 true 00:08:07.169 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:07.169 11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.428 
11:02:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.687 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:07.687 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:07.947 true 00:08:07.947 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:07.947 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.206 11:02:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.465 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:08.465 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:08.724 true 00:08:08.724 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:08.724 11:02:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.663 11:02:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.923 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:09.923 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:10.181 true 00:08:10.181 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:10.181 11:02:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.440 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.698 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:10.698 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:10.957 true 00:08:10.957 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:10.957 11:02:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:11.896 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.155 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:12.155 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:12.413 true 00:08:12.413 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086 00:08:12.413 11:02:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.672 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.930 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:12.930 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:13.188 Initializing NVMe Controllers 00:08:13.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:13.188 Controller IO queue size 128, less than required. 00:08:13.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.188 Controller IO queue size 128, less than required. 
00:08:13.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:13.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:13.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:13.188 Initialization complete. Launching workers.
00:08:13.188 ========================================================
00:08:13.188 Latency(us)
00:08:13.188 Device Information :                     IOPS      MiB/s    Average        min        max
00:08:13.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   496.56   0.24  105177.54    3472.87 1039386.40
00:08:13.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  8529.61   4.16   15007.33    3371.63  539275.18
00:08:13.188 ========================================================
00:08:13.188 Total                                                                    :  9026.17   4.41   19967.92    3371.63 1039386.40
00:08:13.188
00:08:13.188 true
00:08:13.188 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115086
00:08:13.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115086) - No such process
00:08:13.188 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115086
00:08:13.188 11:02:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.445 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:13.705 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:13.705 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
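The entries that follow record ns_hotplug_stress.sh spawning eight parallel add_remove workers (script lines @58-@66), each of which repeatedly attaches and detaches a null bdev as a namespace (@14-@18). A minimal standalone sketch of that loop, reconstructed from the transcript: here `rpc` is a local stub standing in for scripts/rpc.py (an assumption so the sketch runs without a live SPDK target; the real test drives the target at nqn.2016-06.io.spdk:cnode1).

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress loop visible in this log. "rpc" is a stub;
# the real script invokes /.../spdk/scripts/rpc.py against a running target.
rpc() { echo "rpc $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

# @14-@18: each worker races the others, attaching its bdev as a
# namespace and immediately removing it, ten times.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

# @59-@60: one 100 MiB null bdev (4096-byte blocks) per worker,
# matching the "bdev_null_create nullN 100 4096" entries above.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done

# @62-@66: launch the workers in the background, collect their PIDs,
# and wait for all of them (the "wait 119151 119152 ..." entry below).
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done
wait "${pids[@]}"
```

The interleaved @16/@17/@18 entries in the log are exactly these eight loops running concurrently, which is why the add/remove commands for different NSIDs appear shuffled together.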
target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:13.705 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:13.705 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:13.705 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:13.964 null0 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:14.224 null1 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.224 11:02:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:14.485 null2 00:08:14.485 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:14.485 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:14.485 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:15.050 null3 00:08:15.050 11:02:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.050 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.050 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:15.050 null4 00:08:15.050 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.050 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.050 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:15.309 null5 00:08:15.309 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.309 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.309 11:02:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:15.569 null6 00:08:15.830 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:15.830 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:15.830 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:16.090 null7 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.090 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119151 119152 119154 119156 119159 119161 119163 119165 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.091 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.350 11:02:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:16.610 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:16.869 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:17.130 11:02:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:17.390 11:02:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:17.390 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.960 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.961 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.220 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.220 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.220 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.221 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.221 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.221 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.221 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.480 11:02:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.741 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.000 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.000 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.000 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.000 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.000 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.001 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.260 11:02:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.519 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.520 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.779 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.038 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.298 11:02:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:20.557 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.817 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:21.077 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.338 11:02:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:21.598 11:02:46
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.598 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.169 rmmod nvme_tcp 00:08:22.169 rmmod nvme_fabrics 00:08:22.169 rmmod nvme_keyring 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 114777 ']' 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 114777 00:08:22.169 11:02:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 114777 ']' 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 114777 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:22.169 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114777 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114777' 00:08:22.170 killing process with pid 114777 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 114777 00:08:22.170 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 114777 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.432 11:02:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.345 00:08:24.345 real 0m46.698s 00:08:24.345 user 3m37.386s 00:08:24.345 sys 0m15.826s 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:24.345 ************************************ 00:08:24.345 END TEST nvmf_ns_hotplug_stress 00:08:24.345 ************************************ 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.345 ************************************ 00:08:24.345 START TEST 
nvmf_delete_subsystem 00:08:24.345 ************************************ 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.345 * Looking for test storage... 00:08:24.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.345 11:02:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.605 11:02:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.605 --rc genhtml_branch_coverage=1 00:08:24.605 --rc genhtml_function_coverage=1 00:08:24.605 --rc genhtml_legend=1 00:08:24.605 --rc geninfo_all_blocks=1 00:08:24.605 --rc geninfo_unexecuted_blocks=1 00:08:24.605 00:08:24.605 ' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.605 --rc genhtml_branch_coverage=1 00:08:24.605 --rc genhtml_function_coverage=1 00:08:24.605 --rc genhtml_legend=1 00:08:24.605 --rc geninfo_all_blocks=1 00:08:24.605 --rc geninfo_unexecuted_blocks=1 00:08:24.605 00:08:24.605 ' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.605 --rc genhtml_branch_coverage=1 00:08:24.605 --rc genhtml_function_coverage=1 00:08:24.605 --rc genhtml_legend=1 00:08:24.605 --rc geninfo_all_blocks=1 00:08:24.605 --rc geninfo_unexecuted_blocks=1 00:08:24.605 00:08:24.605 ' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.605 --rc genhtml_branch_coverage=1 00:08:24.605 --rc genhtml_function_coverage=1 00:08:24.605 --rc genhtml_legend=1 00:08:24.605 --rc geninfo_all_blocks=1 
00:08:24.605 --rc geninfo_unexecuted_blocks=1 00:08:24.605 00:08:24.605 ' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.605 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.606 11:02:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.146 11:02:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:27.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:27.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:27.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:27.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.146 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:27.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms
00:08:27.147 
00:08:27.147 --- 10.0.0.2 ping statistics ---
00:08:27.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.147 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:27.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:27.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:08:27.147 
00:08:27.147 --- 10.0.0.1 ping statistics ---
00:08:27.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:27.147 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:08:27.147 11:02:51 
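The trace above (nvmf/common.sh@267-291) moves one NIC port into a private network namespace so a single host can act as both NVMe/TCP target (cvl_0_0, 10.0.0.2, inside the namespace) and initiator (cvl_0_1, 10.0.0.1, in the root namespace), then verifies reachability with a ping in each direction. A dry-runnable sketch of that sequence, with the interface names, addresses, and port taken from the log; the `run` wrapper and `setup_tcp_pair` helper are illustrative, not SPDK's actual functions:

```shell
# Echo commands instead of executing them when DRY_RUN=1, since the real
# sequence needs root, the physical interfaces, and iptables.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_tcp_pair() {
    local target_if=$1 initiator_if=$2 ns=$3
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"              # target port lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"       # initiator IP, root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

With `DRY_RUN=1` the function just prints the commands for inspection, e.g. `DRY_RUN=1 setup_tcp_pair cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk`.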
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=121950 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 121950 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 121950 ']' 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.147 [2024-11-17 11:02:51.545969] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:27.147 [2024-11-17 11:02:51.546057] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.147 [2024-11-17 11:02:51.618265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.147 [2024-11-17 11:02:51.662235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.147 [2024-11-17 11:02:51.662315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.147 [2024-11-17 11:02:51.662338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.147 [2024-11-17 11:02:51.662348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.147 [2024-11-17 11:02:51.662357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
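In the trace above, `nvmfappstart` launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` until the app answers on `/var/tmp/spdk.sock`. A minimal stand-in for that wait loop, assuming a plain unix-socket check is enough (the real helper in the SPDK scripts does more, e.g. retrying an RPC against the socket, so treat this as a sketch):

```shell
# Poll until $sock exists as a unix socket, bailing out early if the
# target process dies first (kill -0 only tests existence, sends no signal).
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
        [ -S "$sock" ] && return 0               # socket is up: ready for RPCs
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Typical use mirrors the log: start the app in the background, then `waitforlisten $! /var/tmp/spdk.sock` before issuing any RPCs.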
00:08:27.147 [2024-11-17 11:02:51.663670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.147 [2024-11-17 11:02:51.663675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.147 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 [2024-11-17 11:02:51.809311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 [2024-11-17 11:02:51.825559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 NULL1 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 Delay0 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122087 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:27.408 11:02:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:27.408 [2024-11-17 11:02:51.910331] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
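The RPCs traced above build the test fixture: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (roughly 1 s of added latency per operation, taking the delay arguments as microseconds) so that I/O from perf is still in flight when the subsystem is deleted. The same sequence as one function; the `rpc` dry-run wrapper and function name are illustrative, but the RPC names and arguments are copied from the log:

```shell
# Echo the rpc.py invocations under DRY_RUN=1 instead of executing them,
# so the sequence can be inspected without a running nvmf_tgt.
rpc() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "rpc.py $*"; else rpc.py "$@"; fi; }

provision_delay_subsystem() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512              # backing bdev: completes instantly
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1s added latency on every I/O path
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

The delay bdev is the point of the test: without it, perf's queue would drain before `nvmf_delete_subsystem` runs and the abort path would never be exercised.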
00:08:29.319 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.319 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.319 11:02:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 
00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 starting I/O failed: -6 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 [2024-11-17 11:02:54.032018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f14b800d680 is same with the state(6) to be set 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Write completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.579 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error 
(sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 
00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 
00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with 
error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 starting I/O failed: -6 00:08:29.580 Read completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 Write completed with error (sct=0, sc=8) 00:08:29.580 [2024-11-17 11:02:54.033171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeffb40 is same with the state(6) to be set 00:08:30.523 [2024-11-17 11:02:55.006574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0d5b0 is same with the state(6) to be set 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 [2024-11-17 11:02:55.034576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f14b800d350 is 
same with the state(6) to be set 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, 
sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 [2024-11-17 11:02:55.035252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeff810 is same with the state(6) to be set 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 
Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 [2024-11-17 11:02:55.035515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeff3f0 is same with the state(6) to be set 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read 
completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 Write completed with error (sct=0, sc=8) 00:08:30.523 Read completed with error (sct=0, sc=8) 00:08:30.523 [2024-11-17 11:02:55.035808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeffe70 is same with the state(6) to be set 00:08:30.523 Initializing NVMe Controllers 00:08:30.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:30.523 Controller IO queue size 128, less than required. 00:08:30.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:30.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:30.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:30.523 Initialization complete. Launching workers. 
00:08:30.523 ========================================================
00:08:30.523 Latency(us)
00:08:30.523 Device Information : IOPS MiB/s Average min max
00:08:30.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.63 0.09 1044515.02 1495.06 2003033.84
00:08:30.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.85 0.07 900198.96 408.40 1012017.69
00:08:30.523 ========================================================
00:08:30.524 Total : 327.48 0.16 978479.49 408.40 2003033.84
00:08:30.524
00:08:30.524 [2024-11-17 11:02:55.036489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0d5b0 (9): Bad file descriptor
00:08:30.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:30.524 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:30.524 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:30.524 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122087
00:08:30.524 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122087
00:08:31.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122087) - No such process
00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122087
00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:08:31.094 11:02:55
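The Latency(us) table above prints one row per core plus a Total row: the Total average is the IOPS-weighted mean of the per-core averages, and the MiB/s column is IOPS times the I/O size (512 bytes here, from the `-o 512` perf argument). A quick sanity-check sketch with the figures copied from the table (not part of the test scripts; the small difference from the printed 978479.49 comes from rounding in the two-decimal per-core values):

```shell
#!/usr/bin/env bash
# Recompute the "Total" row of the spdk_nvme_perf latency table from the
# per-core rows above (figures copied from the log; -o 512 => 512-byte I/Os).
awk 'BEGIN {
    iops[2] = 177.63; avg[2] = 1044515.02   # NSID 1 from core 2
    iops[3] = 149.85; avg[3] = 900198.96    # NSID 1 from core 3
    for (c in iops) { total += iops[c]; sum += iops[c] * avg[c] }
    mibs = total * 512 / (1024 * 1024)      # IOPS x IO size, in MiB/s
    printf "Total : %.2f %.2f %.2f\n", total, mibs, sum / total
}'
```

The weighted mean lands within a couple of microseconds of the printed Total average, confirming how the summary row is derived.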
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122087 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122087 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.094 
11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 [2024-11-17 11:02:55.561410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=122500 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:31.094 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.095 11:02:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:31.095 [2024-11-17 11:02:55.633424] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:31.664 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.664 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:31.664 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.232 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.232 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:32.232 11:02:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.489 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.489 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:32.489 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.055 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.055 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:33.055 11:02:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.619 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.619 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500 00:08:33.619 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:34.185 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:34.185 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500
00:08:34.185 11:02:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:08:34.185 Initializing NVMe Controllers
00:08:34.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:34.185 Controller IO queue size 128, less than required.
00:08:34.185 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:34.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:34.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:34.185 Initialization complete. Launching workers.
00:08:34.185 ========================================================
00:08:34.185 Latency(us)
00:08:34.185 Device Information : IOPS MiB/s Average min max
00:08:34.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003868.80 1000144.89 1011544.67
00:08:34.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005198.22 1000156.08 1040684.08
00:08:34.185 ========================================================
00:08:34.185 Total : 256.00 0.12 1004533.51 1000144.89 1040684.08
00:08:34.185
00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122500
00:08:34.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (122500) - No such process
00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 122500
00:08:34.443 11:02:59
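The repeated delete_subsystem.sh@57/@58/@60 records above are the script's bounded poll: it probes the perf process with `kill -0` (which checks existence without delivering a signal), sleeps 0.5 s, and gives up after the `(( delay++ > 20 ))` guard trips; the "No such process" line is the probe finally failing once perf exits. A minimal sketch of that pattern (the function name and the `sleep` stand-in child are illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Bounded wait for a background process to exit, as in delete_subsystem.sh.
# kill -0 only tests for the PID's existence; no signal is sent.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # give up after ~10 s (20 x 0.5 s)
        sleep 0.5
    done
    return 0
}

sleep 1 &                    # stand-in for the spdk_nvme_perf child
wait_for_exit "$!" && echo "process exited"
```

The same shape appears twice in the log: once for perf pid 122087 with a bound of 30, once for pid 122500 with a bound of 20.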
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.443 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.701 rmmod nvme_tcp 00:08:34.701 rmmod nvme_fabrics 00:08:34.701 rmmod nvme_keyring 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 121950 ']' 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 121950 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 121950 ']' 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 121950 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 121950 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 121950' 00:08:34.701 killing process with pid 121950 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 121950 00:08:34.701 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 121950 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.959 11:02:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.959 11:02:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.865 00:08:36.865 real 0m12.468s 00:08:36.865 user 0m27.850s 00:08:36.865 sys 0m3.039s 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.865 ************************************ 00:08:36.865 END TEST nvmf_delete_subsystem 00:08:36.865 ************************************ 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.865 ************************************ 00:08:36.865 START TEST nvmf_host_management 00:08:36.865 ************************************ 00:08:36.865 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:36.865 * Looking for test storage... 
00:08:36.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:37.124 11:03:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:37.124 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.125 11:03:01 
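The scripts/common.sh trace above is `cmp_versions` at work: `lt 1.15 2` splits each version on `.`/`-` (`IFS=.-:` plus `read -ra`), walks the components numerically, and here decides that lcov 1.15 predates 2. A rough sketch of that component-wise comparison (illustrative, not the script itself; missing components are treated as 0, which matches how the script's arithmetic loop behaves for unset array fields):

```shell
#!/usr/bin/env bash
# Component-wise version comparison in the spirit of cmp_versions:
# split on '.'/'-', compare fields numerically, missing fields count as 0.
version_lt() {
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 < 2"
```

Because the installed lcov is pre-2.x, the trace goes on to add the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options instead of the 2.x `branch_coverage`/`function_coverage` spellings.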
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.125 --rc genhtml_branch_coverage=1 00:08:37.125 --rc genhtml_function_coverage=1 00:08:37.125 --rc genhtml_legend=1 00:08:37.125 --rc geninfo_all_blocks=1 00:08:37.125 --rc geninfo_unexecuted_blocks=1 00:08:37.125 00:08:37.125 ' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.125 --rc genhtml_branch_coverage=1 00:08:37.125 --rc genhtml_function_coverage=1 00:08:37.125 --rc genhtml_legend=1 00:08:37.125 --rc geninfo_all_blocks=1 00:08:37.125 --rc geninfo_unexecuted_blocks=1 00:08:37.125 00:08:37.125 ' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.125 --rc genhtml_branch_coverage=1 00:08:37.125 --rc genhtml_function_coverage=1 00:08:37.125 --rc genhtml_legend=1 00:08:37.125 --rc geninfo_all_blocks=1 00:08:37.125 --rc geninfo_unexecuted_blocks=1 00:08:37.125 00:08:37.125 ' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.125 --rc genhtml_branch_coverage=1 00:08:37.125 --rc genhtml_function_coverage=1 00:08:37.125 --rc genhtml_legend=1 00:08:37.125 --rc geninfo_all_blocks=1 00:08:37.125 --rc geninfo_unexecuted_blocks=1 00:08:37.125 00:08:37.125 ' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
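Each time common.sh is sourced, paths/export.sh prepends the same toolchain directories again, which is why the PATH echoed above repeats /opt/golangci, /opt/protoc and /opt/go many times; the duplicates are harmless but noisy. A hedged sketch of first-seen-order de-duplication for such a PATH (illustration only, not part of the autotest scripts):

```shell
#!/usr/bin/env bash
# Collapse repeated PATH entries, keeping the first occurrence of each.
dedupe_path() {
    local out='' dir IFS=:
    for dir in $1; do                 # IFS=: splits on path separators
        case ":$out:" in
            *":$dir:"*) ;;            # already seen, drop the repeat
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin:/usr/bin"
```

On the echoed PATH above this would collapse the repeated golangci/protoc/go prefixes down to one copy each while leaving the system directories in their original order.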
MALLOC_BDEV_SIZE=64 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.125 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.126 11:03:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.661 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:39.662 11:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.662 11:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:39.662 11:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.662 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.662 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:39.662 11:03:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:08:39.662 11:03:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:08:39.662 00:08:39.662 --- 10.0.0.2 ping statistics --- 00:08:39.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.662 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:39.662 00:08:39.662 --- 10.0.0.1 ping statistics --- 00:08:39.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.662 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.662 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=124978 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 124978 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 124978 ']' 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.663 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.663 [2024-11-17 11:03:04.087477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:39.663 [2024-11-17 11:03:04.087611] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.663 [2024-11-17 11:03:04.160417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.663 [2024-11-17 11:03:04.210536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.663 [2024-11-17 11:03:04.210608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.663 [2024-11-17 11:03:04.210638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.663 [2024-11-17 11:03:04.210650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.663 [2024-11-17 11:03:04.210660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:39.663 [2024-11-17 11:03:04.212406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.663 [2024-11-17 11:03:04.212464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.663 [2024-11-17 11:03:04.212538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.663 [2024-11-17 11:03:04.212544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.921 [2024-11-17 11:03:04.363416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.921 11:03:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.921 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 Malloc0 00:08:39.922 [2024-11-17 11:03:04.444291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=125137 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125137 /var/tmp/bdevperf.sock 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125137 ']' 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.922 { 00:08:39.922 "params": { 00:08:39.922 "name": "Nvme$subsystem", 00:08:39.922 "trtype": "$TEST_TRANSPORT", 00:08:39.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.922 "adrfam": "ipv4", 00:08:39.922 "trsvcid": "$NVMF_PORT", 00:08:39.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.922 "hdgst": ${hdgst:-false}, 
00:08:39.922 "ddgst": ${ddgst:-false} 00:08:39.922 }, 00:08:39.922 "method": "bdev_nvme_attach_controller" 00:08:39.922 } 00:08:39.922 EOF 00:08:39.922 )") 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.922 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.922 "params": { 00:08:39.922 "name": "Nvme0", 00:08:39.922 "trtype": "tcp", 00:08:39.922 "traddr": "10.0.0.2", 00:08:39.922 "adrfam": "ipv4", 00:08:39.922 "trsvcid": "4420", 00:08:39.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.922 "hdgst": false, 00:08:39.922 "ddgst": false 00:08:39.922 }, 00:08:39.922 "method": "bdev_nvme_attach_controller" 00:08:39.922 }' 00:08:39.922 [2024-11-17 11:03:04.525272] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:39.922 [2024-11-17 11:03:04.525351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125137 ] 00:08:40.179 [2024-11-17 11:03:04.596956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.179 [2024-11-17 11:03:04.644104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.438 Running I/O for 10 seconds... 
00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:40.438 11:03:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.698 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.698 [2024-11-17 11:03:05.259160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.698 [2024-11-17 11:03:05.259232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.698
[... identical command/completion pairs repeated for WRITE cid:60-63 (lba 89600-89984) and READ cid:0-58 (lba 81920-89344), each reported as ABORTED - SQ DELETION (00/08); repeats omitted ...]
[2024-11-17 11:03:05.261201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2173090 is same with the state(6) to be set 00:08:40.700 [2024-11-17 11:03:05.262461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management --
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.700 task offset: 89472 on job bdev=Nvme0n1 fails 00:08:40.700 00:08:40.700 Latency(us) 00:08:40.700 [2024-11-17T10:03:05.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.700 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.700 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:40.700 Verification LBA range: start 0x0 length 0x400 00:08:40.700 Nvme0n1 : 0.40 1600.04 100.00 160.00 0.00 35311.82 2694.26 34758.35 00:08:40.700 [2024-11-17T10:03:05.358Z] =================================================================================================================== 00:08:40.700 [2024-11-17T10:03:05.358Z] Total : 1600.04 100.00 160.00 0.00 35311.82 2694.26 34758.35 00:08:40.700 [2024-11-17 11:03:05.264371] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.700 [2024-11-17 11:03:05.264401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2176d70 (9): Bad file descriptor 00:08:40.700 [2024-11-17 11:03:05.269346] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.700 11:03:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125137 00:08:41.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125137) - No such process 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.640 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.641 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.641 { 00:08:41.641 "params": { 00:08:41.641 "name": "Nvme$subsystem", 00:08:41.641 "trtype": "$TEST_TRANSPORT", 00:08:41.641 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.641 "adrfam": "ipv4", 00:08:41.641 "trsvcid": "$NVMF_PORT", 00:08:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.641 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:41.641 "hdgst": ${hdgst:-false}, 00:08:41.641 "ddgst": ${ddgst:-false} 00:08:41.641 }, 00:08:41.641 "method": "bdev_nvme_attach_controller" 00:08:41.641 } 00:08:41.641 EOF 00:08:41.641 )") 00:08:41.641 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.641 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:41.641 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.641 11:03:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.641 "params": { 00:08:41.641 "name": "Nvme0", 00:08:41.641 "trtype": "tcp", 00:08:41.641 "traddr": "10.0.0.2", 00:08:41.641 "adrfam": "ipv4", 00:08:41.641 "trsvcid": "4420", 00:08:41.641 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.641 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.641 "hdgst": false, 00:08:41.641 "ddgst": false 00:08:41.641 }, 00:08:41.641 "method": "bdev_nvme_attach_controller" 00:08:41.641 }' 00:08:41.901 [2024-11-17 11:03:06.324012] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:08:41.901 [2024-11-17 11:03:06.324083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125300 ] 00:08:41.901 [2024-11-17 11:03:06.395200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.901 [2024-11-17 11:03:06.441990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.161 Running I/O for 1 seconds... 
00:08:43.099 1664.00 IOPS, 104.00 MiB/s 00:08:43.099 Latency(us) 00:08:43.099 [2024-11-17T10:03:07.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.099 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.099 Verification LBA range: start 0x0 length 0x400 00:08:43.099 Nvme0n1 : 1.02 1697.80 106.11 0.00 0.00 37084.58 4611.79 33204.91 00:08:43.099 [2024-11-17T10:03:07.757Z] =================================================================================================================== 00:08:43.099 [2024-11-17T10:03:07.757Z] Total : 1697.80 106.11 0.00 0.00 37084.58 4611.79 33204.91 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:43.358 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.359 11:03:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.359 rmmod nvme_tcp 00:08:43.359 rmmod nvme_fabrics 00:08:43.359 rmmod nvme_keyring 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 124978 ']' 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 124978 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 124978 ']' 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 124978 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124978 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124978' 00:08:43.359 killing process with pid 124978 00:08:43.359 11:03:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 124978 00:08:43.359 11:03:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 124978 00:08:43.619 [2024-11-17 11:03:08.161699] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.619 11:03:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:46.158 00:08:46.158 real 0m8.767s 00:08:46.158 user 0m19.012s 
00:08:46.158 sys 0m2.781s 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:46.158 ************************************ 00:08:46.158 END TEST nvmf_host_management 00:08:46.158 ************************************ 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.158 ************************************ 00:08:46.158 START TEST nvmf_lvol 00:08:46.158 ************************************ 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:46.158 * Looking for test storage... 
00:08:46.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.158 11:03:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.158 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.159 --rc genhtml_branch_coverage=1 00:08:46.159 --rc genhtml_function_coverage=1 00:08:46.159 --rc genhtml_legend=1 00:08:46.159 --rc geninfo_all_blocks=1 00:08:46.159 --rc geninfo_unexecuted_blocks=1 
00:08:46.159 00:08:46.159 ' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.159 --rc genhtml_branch_coverage=1 00:08:46.159 --rc genhtml_function_coverage=1 00:08:46.159 --rc genhtml_legend=1 00:08:46.159 --rc geninfo_all_blocks=1 00:08:46.159 --rc geninfo_unexecuted_blocks=1 00:08:46.159 00:08:46.159 ' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.159 --rc genhtml_branch_coverage=1 00:08:46.159 --rc genhtml_function_coverage=1 00:08:46.159 --rc genhtml_legend=1 00:08:46.159 --rc geninfo_all_blocks=1 00:08:46.159 --rc geninfo_unexecuted_blocks=1 00:08:46.159 00:08:46.159 ' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.159 --rc genhtml_branch_coverage=1 00:08:46.159 --rc genhtml_function_coverage=1 00:08:46.159 --rc genhtml_legend=1 00:08:46.159 --rc geninfo_all_blocks=1 00:08:46.159 --rc geninfo_unexecuted_blocks=1 00:08:46.159 00:08:46.159 ' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.159 11:03:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.159 11:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:48.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:48.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.065 
11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:48.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.065 11:03:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:48.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.065 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.066 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.326 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:48.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:08:48.326 00:08:48.326 --- 10.0.0.2 ping statistics --- 00:08:48.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.326 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:48.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:08:48.327 00:08:48.327 --- 10.0.0.1 ping statistics --- 00:08:48.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.327 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=128015 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 128015 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 128015 ']' 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.327 11:03:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.327 [2024-11-17 11:03:12.861878] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:08:48.327 [2024-11-17 11:03:12.861971] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.327 [2024-11-17 11:03:12.932290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:48.327 [2024-11-17 11:03:12.981092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.327 [2024-11-17 11:03:12.981155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.327 [2024-11-17 11:03:12.981169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.327 [2024-11-17 11:03:12.981181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.327 [2024-11-17 11:03:12.981191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:48.586 [2024-11-17 11:03:12.982731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.586 [2024-11-17 11:03:12.982761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.586 [2024-11-17 11:03:12.982765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.586 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:48.844 [2024-11-17 11:03:13.380317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.844 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.102 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:49.102 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:49.360 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:49.360 11:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:49.618 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:50.187 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f5e3451f-d831-417f-be29-9e6c1476b96d 00:08:50.188 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f5e3451f-d831-417f-be29-9e6c1476b96d lvol 20 00:08:50.188 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=baabc951-69bf-474a-8bbf-7d9056628ac2 00:08:50.188 11:03:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.447 11:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 baabc951-69bf-474a-8bbf-7d9056628ac2 00:08:50.706 11:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.273 [2024-11-17 11:03:15.623274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.273 11:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.273 11:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128440 00:08:51.273 11:03:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:51.273 11:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:52.669 11:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot baabc951-69bf-474a-8bbf-7d9056628ac2 MY_SNAPSHOT 00:08:52.669 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=403f5828-7e69-4ac9-a01c-2321d3b3d012 00:08:52.669 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize baabc951-69bf-474a-8bbf-7d9056628ac2 30 00:08:52.927 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 403f5828-7e69-4ac9-a01c-2321d3b3d012 MY_CLONE 00:08:53.497 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=383ec841-5dbe-4358-b0a0-faf9e4731d85 00:08:53.497 11:03:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 383ec841-5dbe-4358-b0a0-faf9e4731d85 00:08:54.065 11:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128440 00:09:02.203 Initializing NVMe Controllers 00:09:02.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:02.203 Controller IO queue size 128, less than required. 00:09:02.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:02.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:02.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:02.203 Initialization complete. Launching workers. 00:09:02.203 ======================================================== 00:09:02.203 Latency(us) 00:09:02.203 Device Information : IOPS MiB/s Average min max 00:09:02.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10544.20 41.19 12140.68 1389.52 132607.82 00:09:02.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10450.20 40.82 12252.10 2192.57 65279.17 00:09:02.203 ======================================================== 00:09:02.203 Total : 20994.40 82.01 12196.14 1389.52 132607.82 00:09:02.203 00:09:02.203 11:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:02.204 11:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete baabc951-69bf-474a-8bbf-7d9056628ac2 00:09:02.204 11:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5e3451f-d831-417f-be29-9e6c1476b96d 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:02.464 11:03:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:02.464 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:02.464 rmmod nvme_tcp 00:09:02.726 rmmod nvme_fabrics 00:09:02.726 rmmod nvme_keyring 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 128015 ']' 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 128015 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 128015 ']' 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 128015 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128015 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128015' 00:09:02.726 killing process with pid 128015 00:09:02.726 11:03:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 128015 00:09:02.726 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 128015 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.988 11:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:04.900 00:09:04.900 real 0m19.208s 00:09:04.900 user 1m4.988s 00:09:04.900 sys 0m5.738s 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.900 ************************************ 00:09:04.900 END TEST 
nvmf_lvol 00:09:04.900 ************************************ 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.900 ************************************ 00:09:04.900 START TEST nvmf_lvs_grow 00:09:04.900 ************************************ 00:09:04.900 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:05.170 * Looking for test storage... 00:09:05.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.170 11:03:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.170 --rc genhtml_branch_coverage=1 00:09:05.170 --rc genhtml_function_coverage=1 00:09:05.170 --rc genhtml_legend=1 00:09:05.170 --rc geninfo_all_blocks=1 00:09:05.170 --rc geninfo_unexecuted_blocks=1 00:09:05.170 00:09:05.170 ' 
00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.170 --rc genhtml_branch_coverage=1 00:09:05.170 --rc genhtml_function_coverage=1 00:09:05.170 --rc genhtml_legend=1 00:09:05.170 --rc geninfo_all_blocks=1 00:09:05.170 --rc geninfo_unexecuted_blocks=1 00:09:05.170 00:09:05.170 ' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.170 --rc genhtml_branch_coverage=1 00:09:05.170 --rc genhtml_function_coverage=1 00:09:05.170 --rc genhtml_legend=1 00:09:05.170 --rc geninfo_all_blocks=1 00:09:05.170 --rc geninfo_unexecuted_blocks=1 00:09:05.170 00:09:05.170 ' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.170 --rc genhtml_branch_coverage=1 00:09:05.170 --rc genhtml_function_coverage=1 00:09:05.170 --rc genhtml_legend=1 00:09:05.170 --rc geninfo_all_blocks=1 00:09:05.170 --rc geninfo_unexecuted_blocks=1 00:09:05.170 00:09:05.170 ' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.170 11:03:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.170 
11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.170 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.171 11:03:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.171 
11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:05.171 11:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.720 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:07.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:07.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.721 
11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:07.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:07.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.721 11:03:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.721 11:03:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:09:07.721 00:09:07.721 --- 10.0.0.2 ping statistics --- 00:09:07.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.721 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:09:07.721 00:09:07.721 --- 10.0.0.1 ping statistics --- 00:09:07.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.721 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=131729 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 131729 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 131729 ']' 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.721 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.722 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.722 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.722 [2024-11-17 11:03:32.146024] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:07.722 [2024-11-17 11:03:32.146112] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.722 [2024-11-17 11:03:32.217693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.722 [2024-11-17 11:03:32.264854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.722 [2024-11-17 11:03:32.264925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.722 [2024-11-17 11:03:32.264938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.722 [2024-11-17 11:03:32.264950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.722 [2024-11-17 11:03:32.264960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:07.722 [2024-11-17 11:03:32.265612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.981 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.982 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.241 [2024-11-17 11:03:32.659417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:08.241 ************************************ 00:09:08.241 START TEST lvs_grow_clean 00:09:08.241 ************************************ 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:08.241 11:03:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.500 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:08.500 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:08.761 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=54200699-4600-4e52-b4d0-67cd33d196d6 00:09:08.761 11:03:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:08.761 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:09.023 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:09.023 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:09.023 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 54200699-4600-4e52-b4d0-67cd33d196d6 lvol 150 00:09:09.285 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb993514-ba22-4964-8ed4-83fce81485b7 00:09:09.285 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:09.285 11:03:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:09.547 [2024-11-17 11:03:34.080952] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:09.547 [2024-11-17 11:03:34.081049] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:09.547 true 00:09:09.547 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:09.547 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:09.810 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:09.810 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:10.077 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb993514-ba22-4964-8ed4-83fce81485b7 00:09:10.338 11:03:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:10.597 [2024-11-17 11:03:35.164312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.598 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132170 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:10.857 11:03:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 132170 /var/tmp/bdevperf.sock 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132170 ']' 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.857 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:10.857 [2024-11-17 11:03:35.485517] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:10.857 [2024-11-17 11:03:35.485606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132170 ] 00:09:11.116 [2024-11-17 11:03:35.551491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.116 [2024-11-17 11:03:35.596060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.116 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.116 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:11.116 11:03:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:11.687 Nvme0n1 00:09:11.687 11:03:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:11.949 [ 00:09:11.949 { 00:09:11.949 "name": "Nvme0n1", 00:09:11.949 "aliases": [ 00:09:11.949 "cb993514-ba22-4964-8ed4-83fce81485b7" 00:09:11.949 ], 00:09:11.949 "product_name": "NVMe disk", 00:09:11.949 "block_size": 4096, 00:09:11.949 "num_blocks": 38912, 00:09:11.949 "uuid": "cb993514-ba22-4964-8ed4-83fce81485b7", 00:09:11.949 "numa_id": 0, 00:09:11.949 "assigned_rate_limits": { 00:09:11.949 "rw_ios_per_sec": 0, 00:09:11.949 "rw_mbytes_per_sec": 0, 00:09:11.949 "r_mbytes_per_sec": 0, 00:09:11.949 "w_mbytes_per_sec": 0 00:09:11.949 }, 00:09:11.949 "claimed": false, 00:09:11.949 "zoned": false, 00:09:11.949 "supported_io_types": { 00:09:11.949 "read": true, 
00:09:11.949 "write": true, 00:09:11.949 "unmap": true, 00:09:11.949 "flush": true, 00:09:11.949 "reset": true, 00:09:11.949 "nvme_admin": true, 00:09:11.949 "nvme_io": true, 00:09:11.949 "nvme_io_md": false, 00:09:11.949 "write_zeroes": true, 00:09:11.949 "zcopy": false, 00:09:11.949 "get_zone_info": false, 00:09:11.949 "zone_management": false, 00:09:11.949 "zone_append": false, 00:09:11.949 "compare": true, 00:09:11.949 "compare_and_write": true, 00:09:11.949 "abort": true, 00:09:11.949 "seek_hole": false, 00:09:11.949 "seek_data": false, 00:09:11.949 "copy": true, 00:09:11.949 "nvme_iov_md": false 00:09:11.949 }, 00:09:11.949 "memory_domains": [ 00:09:11.949 { 00:09:11.949 "dma_device_id": "system", 00:09:11.949 "dma_device_type": 1 00:09:11.949 } 00:09:11.949 ], 00:09:11.949 "driver_specific": { 00:09:11.949 "nvme": [ 00:09:11.949 { 00:09:11.949 "trid": { 00:09:11.949 "trtype": "TCP", 00:09:11.949 "adrfam": "IPv4", 00:09:11.949 "traddr": "10.0.0.2", 00:09:11.949 "trsvcid": "4420", 00:09:11.949 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:11.949 }, 00:09:11.949 "ctrlr_data": { 00:09:11.949 "cntlid": 1, 00:09:11.949 "vendor_id": "0x8086", 00:09:11.949 "model_number": "SPDK bdev Controller", 00:09:11.949 "serial_number": "SPDK0", 00:09:11.949 "firmware_revision": "25.01", 00:09:11.949 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:11.949 "oacs": { 00:09:11.949 "security": 0, 00:09:11.949 "format": 0, 00:09:11.949 "firmware": 0, 00:09:11.949 "ns_manage": 0 00:09:11.949 }, 00:09:11.949 "multi_ctrlr": true, 00:09:11.949 "ana_reporting": false 00:09:11.949 }, 00:09:11.949 "vs": { 00:09:11.949 "nvme_version": "1.3" 00:09:11.949 }, 00:09:11.949 "ns_data": { 00:09:11.949 "id": 1, 00:09:11.949 "can_share": true 00:09:11.949 } 00:09:11.949 } 00:09:11.949 ], 00:09:11.949 "mp_policy": "active_passive" 00:09:11.949 } 00:09:11.949 } 00:09:11.949 ] 00:09:11.949 11:03:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132303 
00:09:11.949 11:03:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.949 11:03:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.949 Running I/O for 10 seconds... 00:09:12.894 Latency(us) 00:09:12.894 [2024-11-17T10:03:37.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.894 Nvme0n1 : 1.00 15535.00 60.68 0.00 0.00 0.00 0.00 0.00 00:09:12.894 [2024-11-17T10:03:37.552Z] =================================================================================================================== 00:09:12.894 [2024-11-17T10:03:37.552Z] Total : 15535.00 60.68 0.00 0.00 0.00 0.00 0.00 00:09:12.894 00:09:13.832 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:13.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.832 Nvme0n1 : 2.00 15741.00 61.49 0.00 0.00 0.00 0.00 0.00 00:09:13.832 [2024-11-17T10:03:38.490Z] =================================================================================================================== 00:09:13.832 [2024-11-17T10:03:38.490Z] Total : 15741.00 61.49 0.00 0.00 0.00 0.00 0.00 00:09:13.832 00:09:14.092 true 00:09:14.092 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:14.092 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:14.353 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:14.353 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:14.353 11:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132303 00:09:14.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.924 Nvme0n1 : 3.00 15871.33 62.00 0.00 0.00 0.00 0.00 0.00 00:09:14.924 [2024-11-17T10:03:39.582Z] =================================================================================================================== 00:09:14.924 [2024-11-17T10:03:39.582Z] Total : 15871.33 62.00 0.00 0.00 0.00 0.00 0.00 00:09:14.924 00:09:15.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.865 Nvme0n1 : 4.00 15905.50 62.13 0.00 0.00 0.00 0.00 0.00 00:09:15.865 [2024-11-17T10:03:40.523Z] =================================================================================================================== 00:09:15.865 [2024-11-17T10:03:40.523Z] Total : 15905.50 62.13 0.00 0.00 0.00 0.00 0.00 00:09:15.865 00:09:17.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.253 Nvme0n1 : 5.00 15950.80 62.31 0.00 0.00 0.00 0.00 0.00 00:09:17.253 [2024-11-17T10:03:41.911Z] =================================================================================================================== 00:09:17.253 [2024-11-17T10:03:41.911Z] Total : 15950.80 62.31 0.00 0.00 0.00 0.00 0.00 00:09:17.253 00:09:18.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.197 Nvme0n1 : 6.00 16013.17 62.55 0.00 0.00 0.00 0.00 0.00 00:09:18.197 [2024-11-17T10:03:42.855Z] =================================================================================================================== 00:09:18.197 
[2024-11-17T10:03:42.855Z] Total : 16013.17 62.55 0.00 0.00 0.00 0.00 0.00 00:09:18.197 00:09:19.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.141 Nvme0n1 : 7.00 16075.43 62.79 0.00 0.00 0.00 0.00 0.00 00:09:19.141 [2024-11-17T10:03:43.799Z] =================================================================================================================== 00:09:19.141 [2024-11-17T10:03:43.799Z] Total : 16075.43 62.79 0.00 0.00 0.00 0.00 0.00 00:09:19.141 00:09:20.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.083 Nvme0n1 : 8.00 16114.75 62.95 0.00 0.00 0.00 0.00 0.00 00:09:20.083 [2024-11-17T10:03:44.741Z] =================================================================================================================== 00:09:20.083 [2024-11-17T10:03:44.741Z] Total : 16114.75 62.95 0.00 0.00 0.00 0.00 0.00 00:09:20.083 00:09:21.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.026 Nvme0n1 : 9.00 16151.78 63.09 0.00 0.00 0.00 0.00 0.00 00:09:21.026 [2024-11-17T10:03:45.684Z] =================================================================================================================== 00:09:21.026 [2024-11-17T10:03:45.684Z] Total : 16151.78 63.09 0.00 0.00 0.00 0.00 0.00 00:09:21.026 00:09:21.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.972 Nvme0n1 : 10.00 16168.70 63.16 0.00 0.00 0.00 0.00 0.00 00:09:21.972 [2024-11-17T10:03:46.630Z] =================================================================================================================== 00:09:21.972 [2024-11-17T10:03:46.630Z] Total : 16168.70 63.16 0.00 0.00 0.00 0.00 0.00 00:09:21.972 00:09:21.972 00:09:21.972 Latency(us) 00:09:21.972 [2024-11-17T10:03:46.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:21.972 Nvme0n1 : 10.00 16176.26 63.19 0.00 0.00 7908.49 2305.90 19029.71 00:09:21.972 [2024-11-17T10:03:46.630Z] =================================================================================================================== 00:09:21.972 [2024-11-17T10:03:46.630Z] Total : 16176.26 63.19 0.00 0.00 7908.49 2305.90 19029.71 00:09:21.972 { 00:09:21.972 "results": [ 00:09:21.972 { 00:09:21.972 "job": "Nvme0n1", 00:09:21.972 "core_mask": "0x2", 00:09:21.972 "workload": "randwrite", 00:09:21.972 "status": "finished", 00:09:21.972 "queue_depth": 128, 00:09:21.972 "io_size": 4096, 00:09:21.972 "runtime": 10.00324, 00:09:21.972 "iops": 16176.258892118953, 00:09:21.972 "mibps": 63.18851129733966, 00:09:21.972 "io_failed": 0, 00:09:21.972 "io_timeout": 0, 00:09:21.972 "avg_latency_us": 7908.485910874444, 00:09:21.972 "min_latency_us": 2305.8962962962964, 00:09:21.972 "max_latency_us": 19029.712592592594 00:09:21.972 } 00:09:21.972 ], 00:09:21.972 "core_count": 1 00:09:21.972 } 00:09:21.972 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132170 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132170 ']' 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132170 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132170 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:21.973 11:03:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132170' 00:09:21.973 killing process with pid 132170 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132170 00:09:21.973 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.973 00:09:21.973 Latency(us) 00:09:21.973 [2024-11-17T10:03:46.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.973 [2024-11-17T10:03:46.631Z] =================================================================================================================== 00:09:21.973 [2024-11-17T10:03:46.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.973 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132170 00:09:22.232 11:03:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:22.492 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:22.751 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:22.751 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.012 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:23.012 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:23.012 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.274 [2024-11-17 11:03:47.840412] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.274 11:03:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:23.274 11:03:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:23.534 request: 00:09:23.534 { 00:09:23.534 "uuid": "54200699-4600-4e52-b4d0-67cd33d196d6", 00:09:23.534 "method": "bdev_lvol_get_lvstores", 00:09:23.534 "req_id": 1 00:09:23.534 } 00:09:23.534 Got JSON-RPC error response 00:09:23.534 response: 00:09:23.534 { 00:09:23.534 "code": -19, 00:09:23.534 "message": "No such device" 00:09:23.534 } 00:09:23.534 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:23.534 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:23.534 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:23.534 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:23.534 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:23.793 aio_bdev 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb993514-ba22-4964-8ed4-83fce81485b7 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cb993514-ba22-4964-8ed4-83fce81485b7 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.793 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:24.052 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb993514-ba22-4964-8ed4-83fce81485b7 -t 2000 00:09:24.310 [ 00:09:24.310 { 00:09:24.310 "name": "cb993514-ba22-4964-8ed4-83fce81485b7", 00:09:24.310 "aliases": [ 00:09:24.311 "lvs/lvol" 00:09:24.311 ], 00:09:24.311 "product_name": "Logical Volume", 00:09:24.311 "block_size": 4096, 00:09:24.311 "num_blocks": 38912, 00:09:24.311 "uuid": "cb993514-ba22-4964-8ed4-83fce81485b7", 00:09:24.311 "assigned_rate_limits": { 00:09:24.311 "rw_ios_per_sec": 0, 00:09:24.311 "rw_mbytes_per_sec": 0, 00:09:24.311 "r_mbytes_per_sec": 0, 00:09:24.311 "w_mbytes_per_sec": 0 00:09:24.311 }, 00:09:24.311 "claimed": false, 00:09:24.311 "zoned": false, 00:09:24.311 "supported_io_types": { 00:09:24.311 "read": true, 00:09:24.311 "write": true, 00:09:24.311 "unmap": true, 00:09:24.311 "flush": false, 00:09:24.311 "reset": true, 00:09:24.311 
"nvme_admin": false, 00:09:24.311 "nvme_io": false, 00:09:24.311 "nvme_io_md": false, 00:09:24.311 "write_zeroes": true, 00:09:24.311 "zcopy": false, 00:09:24.311 "get_zone_info": false, 00:09:24.311 "zone_management": false, 00:09:24.311 "zone_append": false, 00:09:24.311 "compare": false, 00:09:24.311 "compare_and_write": false, 00:09:24.311 "abort": false, 00:09:24.311 "seek_hole": true, 00:09:24.311 "seek_data": true, 00:09:24.311 "copy": false, 00:09:24.311 "nvme_iov_md": false 00:09:24.311 }, 00:09:24.311 "driver_specific": { 00:09:24.311 "lvol": { 00:09:24.311 "lvol_store_uuid": "54200699-4600-4e52-b4d0-67cd33d196d6", 00:09:24.311 "base_bdev": "aio_bdev", 00:09:24.311 "thin_provision": false, 00:09:24.311 "num_allocated_clusters": 38, 00:09:24.311 "snapshot": false, 00:09:24.311 "clone": false, 00:09:24.311 "esnap_clone": false 00:09:24.311 } 00:09:24.311 } 00:09:24.311 } 00:09:24.311 ] 00:09:24.311 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:24.311 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:24.311 11:03:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:24.570 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:24.570 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:24.570 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:24.827 11:03:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:24.827 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb993514-ba22-4964-8ed4-83fce81485b7 00:09:25.088 11:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54200699-4600-4e52-b4d0-67cd33d196d6 00:09:25.662 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:25.663 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.923 00:09:25.923 real 0m17.619s 00:09:25.923 user 0m16.353s 00:09:25.923 sys 0m2.179s 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:25.923 ************************************ 00:09:25.923 END TEST lvs_grow_clean 00:09:25.923 ************************************ 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:25.923 ************************************ 
00:09:25.923 START TEST lvs_grow_dirty 00:09:25.923 ************************************ 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:25.923 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:26.184 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:26.184 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:26.446 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:26.446 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:26.446 11:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:26.706 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:26.706 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:26.706 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2f45687-4390-4a64-9c8c-80f1ed233500 lvol 150 00:09:26.965 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:26.965 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:26.965 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:27.224 [2024-11-17 11:03:51.755964] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:27.224 [2024-11-17 11:03:51.756062] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:27.224 true 00:09:27.224 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:27.224 11:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:27.485 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:27.485 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:27.744 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:28.004 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:28.263 [2024-11-17 11:03:52.815233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.263 11:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134354 00:09:28.522 11:03:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134354 /var/tmp/bdevperf.sock 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 134354 ']' 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:28.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.522 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.522 [2024-11-17 11:03:53.135997] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:28.522 [2024-11-17 11:03:53.136073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134354 ] 00:09:28.780 [2024-11-17 11:03:53.201172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.780 [2024-11-17 11:03:53.245950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.781 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.781 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:28.781 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:29.040 Nvme0n1 00:09:29.040 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:29.611 [ 00:09:29.611 { 00:09:29.611 "name": "Nvme0n1", 00:09:29.611 "aliases": [ 00:09:29.611 "0cfa3583-1bba-4c46-9dd3-b87571d03c52" 00:09:29.611 ], 00:09:29.611 "product_name": "NVMe disk", 00:09:29.611 "block_size": 4096, 00:09:29.611 "num_blocks": 38912, 00:09:29.611 "uuid": "0cfa3583-1bba-4c46-9dd3-b87571d03c52", 00:09:29.611 "numa_id": 0, 00:09:29.611 "assigned_rate_limits": { 00:09:29.611 "rw_ios_per_sec": 0, 00:09:29.611 "rw_mbytes_per_sec": 0, 00:09:29.611 "r_mbytes_per_sec": 0, 00:09:29.611 "w_mbytes_per_sec": 0 00:09:29.611 }, 00:09:29.611 "claimed": false, 00:09:29.611 "zoned": false, 00:09:29.611 "supported_io_types": { 00:09:29.611 "read": true, 
00:09:29.611 "write": true, 00:09:29.611 "unmap": true, 00:09:29.611 "flush": true, 00:09:29.611 "reset": true, 00:09:29.612 "nvme_admin": true, 00:09:29.612 "nvme_io": true, 00:09:29.612 "nvme_io_md": false, 00:09:29.612 "write_zeroes": true, 00:09:29.612 "zcopy": false, 00:09:29.612 "get_zone_info": false, 00:09:29.612 "zone_management": false, 00:09:29.612 "zone_append": false, 00:09:29.612 "compare": true, 00:09:29.612 "compare_and_write": true, 00:09:29.612 "abort": true, 00:09:29.612 "seek_hole": false, 00:09:29.612 "seek_data": false, 00:09:29.612 "copy": true, 00:09:29.612 "nvme_iov_md": false 00:09:29.612 }, 00:09:29.612 "memory_domains": [ 00:09:29.612 { 00:09:29.612 "dma_device_id": "system", 00:09:29.612 "dma_device_type": 1 00:09:29.612 } 00:09:29.612 ], 00:09:29.612 "driver_specific": { 00:09:29.612 "nvme": [ 00:09:29.612 { 00:09:29.612 "trid": { 00:09:29.612 "trtype": "TCP", 00:09:29.612 "adrfam": "IPv4", 00:09:29.612 "traddr": "10.0.0.2", 00:09:29.612 "trsvcid": "4420", 00:09:29.612 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:29.612 }, 00:09:29.612 "ctrlr_data": { 00:09:29.612 "cntlid": 1, 00:09:29.612 "vendor_id": "0x8086", 00:09:29.612 "model_number": "SPDK bdev Controller", 00:09:29.612 "serial_number": "SPDK0", 00:09:29.612 "firmware_revision": "25.01", 00:09:29.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:29.612 "oacs": { 00:09:29.612 "security": 0, 00:09:29.612 "format": 0, 00:09:29.612 "firmware": 0, 00:09:29.612 "ns_manage": 0 00:09:29.612 }, 00:09:29.612 "multi_ctrlr": true, 00:09:29.612 "ana_reporting": false 00:09:29.612 }, 00:09:29.612 "vs": { 00:09:29.612 "nvme_version": "1.3" 00:09:29.612 }, 00:09:29.612 "ns_data": { 00:09:29.612 "id": 1, 00:09:29.612 "can_share": true 00:09:29.612 } 00:09:29.612 } 00:09:29.612 ], 00:09:29.612 "mp_policy": "active_passive" 00:09:29.612 } 00:09:29.612 } 00:09:29.612 ] 00:09:29.612 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134485 
00:09:29.612 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:29.612 11:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.612 Running I/O for 10 seconds... 00:09:30.556 Latency(us) 00:09:30.556 [2024-11-17T10:03:55.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.556 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:09:30.556 [2024-11-17T10:03:55.214Z] =================================================================================================================== 00:09:30.556 [2024-11-17T10:03:55.214Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:09:30.556 00:09:31.499 11:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:31.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.500 Nvme0n1 : 2.00 15472.00 60.44 0.00 0.00 0.00 0.00 0.00 00:09:31.500 [2024-11-17T10:03:56.158Z] =================================================================================================================== 00:09:31.500 [2024-11-17T10:03:56.158Z] Total : 15472.00 60.44 0.00 0.00 0.00 0.00 0.00 00:09:31.500 00:09:31.758 true 00:09:31.758 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:31.758 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:32.020 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:32.020 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:32.020 11:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134485 00:09:32.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.592 Nvme0n1 : 3.00 15564.00 60.80 0.00 0.00 0.00 0.00 0.00 00:09:32.592 [2024-11-17T10:03:57.250Z] =================================================================================================================== 00:09:32.592 [2024-11-17T10:03:57.250Z] Total : 15564.00 60.80 0.00 0.00 0.00 0.00 0.00 00:09:32.592 00:09:33.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.538 Nvme0n1 : 4.00 15705.25 61.35 0.00 0.00 0.00 0.00 0.00 00:09:33.538 [2024-11-17T10:03:58.196Z] =================================================================================================================== 00:09:33.538 [2024-11-17T10:03:58.196Z] Total : 15705.25 61.35 0.00 0.00 0.00 0.00 0.00 00:09:33.538 00:09:34.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.481 Nvme0n1 : 5.00 15771.40 61.61 0.00 0.00 0.00 0.00 0.00 00:09:34.481 [2024-11-17T10:03:59.139Z] =================================================================================================================== 00:09:34.481 [2024-11-17T10:03:59.139Z] Total : 15771.40 61.61 0.00 0.00 0.00 0.00 0.00 00:09:34.481 00:09:35.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.872 Nvme0n1 : 6.00 15841.83 61.88 0.00 0.00 0.00 0.00 0.00 00:09:35.872 [2024-11-17T10:04:00.530Z] =================================================================================================================== 00:09:35.872 
[2024-11-17T10:04:00.530Z] Total : 15841.83 61.88 0.00 0.00 0.00 0.00 0.00 00:09:35.872 00:09:36.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.818 Nvme0n1 : 7.00 15874.00 62.01 0.00 0.00 0.00 0.00 0.00 00:09:36.818 [2024-11-17T10:04:01.476Z] =================================================================================================================== 00:09:36.818 [2024-11-17T10:04:01.476Z] Total : 15874.00 62.01 0.00 0.00 0.00 0.00 0.00 00:09:36.818 00:09:37.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.762 Nvme0n1 : 8.00 15929.88 62.23 0.00 0.00 0.00 0.00 0.00 00:09:37.762 [2024-11-17T10:04:02.420Z] =================================================================================================================== 00:09:37.762 [2024-11-17T10:04:02.420Z] Total : 15929.88 62.23 0.00 0.00 0.00 0.00 0.00 00:09:37.762 00:09:38.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.705 Nvme0n1 : 9.00 15980.22 62.42 0.00 0.00 0.00 0.00 0.00 00:09:38.705 [2024-11-17T10:04:03.363Z] =================================================================================================================== 00:09:38.705 [2024-11-17T10:04:03.363Z] Total : 15980.22 62.42 0.00 0.00 0.00 0.00 0.00 00:09:38.705 00:09:39.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.653 Nvme0n1 : 10.00 16007.80 62.53 0.00 0.00 0.00 0.00 0.00 00:09:39.653 [2024-11-17T10:04:04.311Z] =================================================================================================================== 00:09:39.653 [2024-11-17T10:04:04.311Z] Total : 16007.80 62.53 0.00 0.00 0.00 0.00 0.00 00:09:39.653 00:09:39.653 00:09:39.653 Latency(us) 00:09:39.653 [2024-11-17T10:04:04.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:39.653 Nvme0n1 : 10.01 16007.44 62.53 0.00 0.00 7991.93 4223.43 17864.63 00:09:39.653 [2024-11-17T10:04:04.311Z] =================================================================================================================== 00:09:39.653 [2024-11-17T10:04:04.311Z] Total : 16007.44 62.53 0.00 0.00 7991.93 4223.43 17864.63 00:09:39.653 { 00:09:39.653 "results": [ 00:09:39.653 { 00:09:39.653 "job": "Nvme0n1", 00:09:39.653 "core_mask": "0x2", 00:09:39.653 "workload": "randwrite", 00:09:39.653 "status": "finished", 00:09:39.653 "queue_depth": 128, 00:09:39.653 "io_size": 4096, 00:09:39.653 "runtime": 10.008219, 00:09:39.653 "iops": 16007.443482201978, 00:09:39.653 "mibps": 62.529076102351475, 00:09:39.653 "io_failed": 0, 00:09:39.653 "io_timeout": 0, 00:09:39.653 "avg_latency_us": 7991.9250430256225, 00:09:39.653 "min_latency_us": 4223.431111111111, 00:09:39.653 "max_latency_us": 17864.62814814815 00:09:39.653 } 00:09:39.653 ], 00:09:39.653 "core_count": 1 00:09:39.653 } 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134354 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 134354 ']' 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 134354 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134354 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:39.653 11:04:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134354' 00:09:39.653 killing process with pid 134354 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 134354 00:09:39.653 Received shutdown signal, test time was about 10.000000 seconds 00:09:39.653 00:09:39.653 Latency(us) 00:09:39.653 [2024-11-17T10:04:04.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.653 [2024-11-17T10:04:04.311Z] =================================================================================================================== 00:09:39.653 [2024-11-17T10:04:04.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:39.653 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 134354 00:09:39.913 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:40.172 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:40.430 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:40.430 11:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:40.691 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:40.691 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:40.691 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 131729 00:09:40.691 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 131729 00:09:40.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 131729 Killed "${NVMF_APP[@]}" "$@" 00:09:40.691 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=135804 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 135804 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 135804 ']' 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.692 11:04:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.692 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.692 [2024-11-17 11:04:05.281333] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:40.692 [2024-11-17 11:04:05.281420] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.951 [2024-11-17 11:04:05.353323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.951 [2024-11-17 11:04:05.401105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.951 [2024-11-17 11:04:05.401172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.951 [2024-11-17 11:04:05.401184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.951 [2024-11-17 11:04:05.401196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.951 [2024-11-17 11:04:05.401205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:40.951 [2024-11-17 11:04:05.401819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.951 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:41.213 [2024-11-17 11:04:05.787598] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:41.213 [2024-11-17 11:04:05.787722] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:41.213 [2024-11-17 11:04:05.787768] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0cfa3583-1bba-4c46-9dd3-b87571d03c52 
00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.213 11:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.472 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cfa3583-1bba-4c46-9dd3-b87571d03c52 -t 2000 00:09:41.731 [ 00:09:41.731 { 00:09:41.731 "name": "0cfa3583-1bba-4c46-9dd3-b87571d03c52", 00:09:41.731 "aliases": [ 00:09:41.731 "lvs/lvol" 00:09:41.731 ], 00:09:41.731 "product_name": "Logical Volume", 00:09:41.731 "block_size": 4096, 00:09:41.731 "num_blocks": 38912, 00:09:41.731 "uuid": "0cfa3583-1bba-4c46-9dd3-b87571d03c52", 00:09:41.731 "assigned_rate_limits": { 00:09:41.731 "rw_ios_per_sec": 0, 00:09:41.731 "rw_mbytes_per_sec": 0, 00:09:41.731 "r_mbytes_per_sec": 0, 00:09:41.731 "w_mbytes_per_sec": 0 00:09:41.731 }, 00:09:41.731 "claimed": false, 00:09:41.731 "zoned": false, 00:09:41.731 "supported_io_types": { 00:09:41.731 "read": true, 00:09:41.731 "write": true, 00:09:41.731 "unmap": true, 00:09:41.731 "flush": false, 00:09:41.731 "reset": true, 00:09:41.731 "nvme_admin": false, 00:09:41.731 "nvme_io": false, 00:09:41.731 "nvme_io_md": false, 00:09:41.731 "write_zeroes": true, 00:09:41.731 "zcopy": false, 00:09:41.731 "get_zone_info": false, 00:09:41.731 "zone_management": false, 00:09:41.731 "zone_append": 
false, 00:09:41.731 "compare": false, 00:09:41.731 "compare_and_write": false, 00:09:41.731 "abort": false, 00:09:41.731 "seek_hole": true, 00:09:41.731 "seek_data": true, 00:09:41.731 "copy": false, 00:09:41.731 "nvme_iov_md": false 00:09:41.731 }, 00:09:41.731 "driver_specific": { 00:09:41.731 "lvol": { 00:09:41.731 "lvol_store_uuid": "d2f45687-4390-4a64-9c8c-80f1ed233500", 00:09:41.731 "base_bdev": "aio_bdev", 00:09:41.731 "thin_provision": false, 00:09:41.731 "num_allocated_clusters": 38, 00:09:41.731 "snapshot": false, 00:09:41.731 "clone": false, 00:09:41.731 "esnap_clone": false 00:09:41.731 } 00:09:41.731 } 00:09:41.731 } 00:09:41.731 ] 00:09:41.731 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:41.731 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:41.732 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:41.991 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:41.991 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:41.991 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:42.252 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:42.252 11:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:09:42.514 [2024-11-17 11:04:07.137440] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.774 11:04:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:42.774 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:43.035 request: 00:09:43.035 { 00:09:43.035 "uuid": "d2f45687-4390-4a64-9c8c-80f1ed233500", 00:09:43.035 "method": "bdev_lvol_get_lvstores", 00:09:43.035 "req_id": 1 00:09:43.035 } 00:09:43.035 Got JSON-RPC error response 00:09:43.035 response: 00:09:43.035 { 00:09:43.035 "code": -19, 00:09:43.035 "message": "No such device" 00:09:43.035 } 00:09:43.035 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:43.035 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.035 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.035 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.035 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.297 aio_bdev 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.297 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.559 11:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0cfa3583-1bba-4c46-9dd3-b87571d03c52 -t 2000 00:09:43.821 [ 00:09:43.821 { 00:09:43.821 "name": "0cfa3583-1bba-4c46-9dd3-b87571d03c52", 00:09:43.821 "aliases": [ 00:09:43.821 "lvs/lvol" 00:09:43.821 ], 00:09:43.821 "product_name": "Logical Volume", 00:09:43.821 "block_size": 4096, 00:09:43.821 "num_blocks": 38912, 00:09:43.821 "uuid": "0cfa3583-1bba-4c46-9dd3-b87571d03c52", 00:09:43.821 "assigned_rate_limits": { 00:09:43.821 "rw_ios_per_sec": 0, 00:09:43.821 "rw_mbytes_per_sec": 0, 00:09:43.821 "r_mbytes_per_sec": 0, 00:09:43.821 "w_mbytes_per_sec": 0 00:09:43.821 }, 00:09:43.821 "claimed": false, 00:09:43.821 "zoned": false, 00:09:43.821 "supported_io_types": { 00:09:43.821 "read": true, 00:09:43.821 "write": true, 00:09:43.821 "unmap": true, 00:09:43.821 "flush": false, 00:09:43.821 "reset": true, 00:09:43.821 "nvme_admin": false, 00:09:43.821 "nvme_io": false, 00:09:43.821 "nvme_io_md": false, 00:09:43.821 "write_zeroes": true, 00:09:43.821 "zcopy": false, 00:09:43.821 "get_zone_info": false, 00:09:43.821 "zone_management": false, 00:09:43.821 "zone_append": false, 00:09:43.821 "compare": false, 00:09:43.821 "compare_and_write": false, 
00:09:43.821 "abort": false, 00:09:43.821 "seek_hole": true, 00:09:43.821 "seek_data": true, 00:09:43.821 "copy": false, 00:09:43.821 "nvme_iov_md": false 00:09:43.821 }, 00:09:43.821 "driver_specific": { 00:09:43.821 "lvol": { 00:09:43.821 "lvol_store_uuid": "d2f45687-4390-4a64-9c8c-80f1ed233500", 00:09:43.821 "base_bdev": "aio_bdev", 00:09:43.821 "thin_provision": false, 00:09:43.821 "num_allocated_clusters": 38, 00:09:43.821 "snapshot": false, 00:09:43.821 "clone": false, 00:09:43.821 "esnap_clone": false 00:09:43.821 } 00:09:43.821 } 00:09:43.821 } 00:09:43.821 ] 00:09:43.821 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:43.821 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:43.821 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.082 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.082 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:44.082 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.343 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.344 11:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0cfa3583-1bba-4c46-9dd3-b87571d03c52 00:09:44.606 11:04:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2f45687-4390-4a64-9c8c-80f1ed233500 00:09:44.866 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.125 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.125 00:09:45.125 real 0m19.253s 00:09:45.125 user 0m48.743s 00:09:45.125 sys 0m4.604s 00:09:45.125 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.125 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:45.125 ************************************ 00:09:45.125 END TEST lvs_grow_dirty 00:09:45.125 ************************************ 00:09:45.125 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:45.126 nvmf_trace.0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.126 rmmod nvme_tcp 00:09:45.126 rmmod nvme_fabrics 00:09:45.126 rmmod nvme_keyring 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 135804 ']' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 135804 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 135804 ']' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 135804 
00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.126 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135804 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135804' 00:09:45.385 killing process with pid 135804 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 135804 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 135804 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.385 11:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.927 00:09:47.927 real 0m42.470s 00:09:47.927 user 1m11.110s 00:09:47.927 sys 0m8.853s 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:47.927 ************************************ 00:09:47.927 END TEST nvmf_lvs_grow 00:09:47.927 ************************************ 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.927 ************************************ 00:09:47.927 START TEST nvmf_bdev_io_wait 00:09:47.927 ************************************ 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:47.927 * Looking for test storage... 
00:09:47.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:47.927 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.928 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.928 --rc genhtml_branch_coverage=1 00:09:47.928 --rc genhtml_function_coverage=1 00:09:47.928 --rc genhtml_legend=1 00:09:47.928 --rc geninfo_all_blocks=1 00:09:47.928 --rc geninfo_unexecuted_blocks=1 00:09:47.928 00:09:47.928 ' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.928 --rc genhtml_branch_coverage=1 00:09:47.928 --rc genhtml_function_coverage=1 00:09:47.928 --rc genhtml_legend=1 00:09:47.928 --rc geninfo_all_blocks=1 00:09:47.928 --rc geninfo_unexecuted_blocks=1 00:09:47.928 00:09:47.928 ' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.928 --rc genhtml_branch_coverage=1 00:09:47.928 --rc genhtml_function_coverage=1 00:09:47.928 --rc genhtml_legend=1 00:09:47.928 --rc geninfo_all_blocks=1 00:09:47.928 --rc geninfo_unexecuted_blocks=1 00:09:47.928 00:09:47.928 ' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.928 --rc genhtml_branch_coverage=1 00:09:47.928 --rc genhtml_function_coverage=1 00:09:47.928 --rc genhtml_legend=1 00:09:47.928 --rc geninfo_all_blocks=1 00:09:47.928 --rc geninfo_unexecuted_blocks=1 00:09:47.928 00:09:47.928 ' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.928 11:04:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.928 11:04:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.840 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.841 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:49.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:49.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.841 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:49.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.841 
11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:49.841 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.841 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:09:49.841 00:09:49.841 --- 10.0.0.2 ping statistics --- 00:09:49.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.841 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:49.841 00:09:49.841 --- 10.0.0.1 ping statistics --- 00:09:49.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.841 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:49.841 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=138365 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 138365 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 138365 ']' 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.103 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.103 [2024-11-17 11:04:14.550634] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.103 [2024-11-17 11:04:14.550716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.103 [2024-11-17 11:04:14.621113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.103 [2024-11-17 11:04:14.672067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.103 [2024-11-17 11:04:14.672164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:50.103 [2024-11-17 11:04:14.672183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.103 [2024-11-17 11:04:14.672194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.103 [2024-11-17 11:04:14.672203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.103 [2024-11-17 11:04:14.673975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.103 [2024-11-17 11:04:14.674039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.103 [2024-11-17 11:04:14.674109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.103 [2024-11-17 11:04:14.674113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 [2024-11-17 11:04:14.894928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 Malloc0 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 
11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:50.364 [2024-11-17 11:04:14.948149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138396 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138397 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138399 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.364 { 00:09:50.364 "params": { 00:09:50.364 "name": "Nvme$subsystem", 00:09:50.364 "trtype": "$TEST_TRANSPORT", 00:09:50.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.364 "adrfam": "ipv4", 00:09:50.364 "trsvcid": "$NVMF_PORT", 00:09:50.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.364 "hdgst": ${hdgst:-false}, 00:09:50.364 "ddgst": ${ddgst:-false} 00:09:50.364 }, 00:09:50.364 "method": "bdev_nvme_attach_controller" 00:09:50.364 } 00:09:50.364 EOF 00:09:50.364 )") 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138402 00:09:50.364 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:50.365 11:04:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.365 { 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme$subsystem", 00:09:50.365 "trtype": "$TEST_TRANSPORT", 00:09:50.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "$NVMF_PORT", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.365 "hdgst": ${hdgst:-false}, 00:09:50.365 "ddgst": ${ddgst:-false} 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 } 00:09:50.365 EOF 00:09:50.365 )") 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.365 { 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme$subsystem", 00:09:50.365 "trtype": "$TEST_TRANSPORT", 00:09:50.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "$NVMF_PORT", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.365 "hdgst": ${hdgst:-false}, 00:09:50.365 "ddgst": ${ddgst:-false} 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 } 00:09:50.365 EOF 00:09:50.365 )") 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.365 { 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme$subsystem", 00:09:50.365 "trtype": "$TEST_TRANSPORT", 00:09:50.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "$NVMF_PORT", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.365 "hdgst": ${hdgst:-false}, 00:09:50.365 "ddgst": ${ddgst:-false} 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 } 00:09:50.365 EOF 00:09:50.365 )") 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138396 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme1", 00:09:50.365 "trtype": "tcp", 00:09:50.365 "traddr": "10.0.0.2", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "4420", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.365 "hdgst": false, 00:09:50.365 "ddgst": false 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 }' 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme1", 00:09:50.365 "trtype": "tcp", 00:09:50.365 "traddr": "10.0.0.2", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "4420", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.365 "hdgst": false, 00:09:50.365 "ddgst": false 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 }' 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf 
'%s\n' '{ 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme1", 00:09:50.365 "trtype": "tcp", 00:09:50.365 "traddr": "10.0.0.2", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "4420", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.365 "hdgst": false, 00:09:50.365 "ddgst": false 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 }' 00:09:50.365 11:04:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.365 "params": { 00:09:50.365 "name": "Nvme1", 00:09:50.365 "trtype": "tcp", 00:09:50.365 "traddr": "10.0.0.2", 00:09:50.365 "adrfam": "ipv4", 00:09:50.365 "trsvcid": "4420", 00:09:50.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:50.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:50.365 "hdgst": false, 00:09:50.365 "ddgst": false 00:09:50.365 }, 00:09:50.365 "method": "bdev_nvme_attach_controller" 00:09:50.365 }' 00:09:50.365 [2024-11-17 11:04:14.998576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.365 [2024-11-17 11:04:14.998576] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.365 [2024-11-17 11:04:14.998592] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
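The interleaved xtrace above shows nvmf/common.sh building one JSON fragment per subsystem with a here-document, joining the fragments with `IFS=,`, and printing the result through `jq .`. A minimal standalone sketch of that pattern follows; the transport, address, and port values here are illustrative stand-ins, not the ones the harness derives at runtime.

```shell
# Sketch of the config-assembly pattern from nvmf/common.sh, as seen in
# the xtrace above. The variable values below are examples only.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-subsystem fragments with commas, as the IFS=, lines in the
# log do, producing one JSON array usable as a bdev config.
IFS=,
joined="[${config[*]}]"
printf '%s\n' "$joined"
```

Piping the joined string through `jq .`, as the log does, both pretty-prints it and fails loudly if any fragment produced malformed JSON.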
00:09:50.365 [2024-11-17 11:04:14.998665] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:50.365 
[2024-11-17 11:04:14.998666] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:50.365 
[2024-11-17 11:04:14.998666] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:50.365 
[2024-11-17 11:04:14.999649] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:50.365 [2024-11-17 11:04:14.999721] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:50.624 [2024-11-17 11:04:15.179219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.624 [2024-11-17 11:04:15.221365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:50.624 [2024-11-17 11:04:15.278292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.885 [2024-11-17 11:04:15.320320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:50.885 [2024-11-17 11:04:15.378220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.885 [2024-11-17 11:04:15.420656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:50.885 [2024-11-17 11:04:15.450022] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1 00:09:50.885 [2024-11-17 11:04:15.487199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:51.144 Running I/O for 1 seconds... 00:09:51.144 Running I/O for 1 seconds... 00:09:51.144 Running I/O for 1 seconds... 00:09:51.144 Running I/O for 1 seconds... 00:09:52.086 198824.00 IOPS, 776.66 MiB/s 00:09:52.086 Latency(us) 00:09:52.086 [2024-11-17T10:04:16.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.086 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:52.086 Nvme1n1 : 1.00 198448.80 775.19 0.00 0.00 641.58 286.72 1844.72 00:09:52.086 [2024-11-17T10:04:16.745Z] =================================================================================================================== 00:09:52.087 [2024-11-17T10:04:16.745Z] Total : 198448.80 775.19 0.00 0.00 641.58 286.72 1844.72 00:09:52.087 10116.00 IOPS, 39.52 MiB/s 00:09:52.087 Latency(us) 00:09:52.087 [2024-11-17T10:04:16.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.087 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:52.087 Nvme1n1 : 1.01 10172.41 39.74 0.00 0.00 12530.77 6310.87 22524.97 00:09:52.087 [2024-11-17T10:04:16.745Z] =================================================================================================================== 00:09:52.087 [2024-11-17T10:04:16.745Z] Total : 10172.41 39.74 0.00 0.00 12530.77 6310.87 22524.97 00:09:52.087 7454.00 IOPS, 29.12 MiB/s 00:09:52.087 Latency(us) 00:09:52.087 [2024-11-17T10:04:16.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.087 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:52.087 Nvme1n1 : 1.01 7507.36 29.33 0.00 0.00 16957.50 8932.31 28738.75 00:09:52.087 [2024-11-17T10:04:16.745Z] =================================================================================================================== 00:09:52.087 
[2024-11-17T10:04:16.745Z] Total : 7507.36 29.33 0.00 0.00 16957.50 8932.31 28738.75 00:09:52.347 9757.00 IOPS, 38.11 MiB/s 00:09:52.347 Latency(us) 00:09:52.347 [2024-11-17T10:04:17.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.347 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:52.347 Nvme1n1 : 1.01 9825.06 38.38 0.00 0.00 12978.46 5170.06 24660.95 00:09:52.347 [2024-11-17T10:04:17.005Z] =================================================================================================================== 00:09:52.347 [2024-11-17T10:04:17.005Z] Total : 9825.06 38.38 0.00 0.00 12978.46 5170.06 24660.95 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138397 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138399 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138402 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:52.347 11:04:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.347 rmmod nvme_tcp 00:09:52.347 rmmod nvme_fabrics 00:09:52.347 rmmod nvme_keyring 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 138365 ']' 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 138365 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 138365 ']' 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 138365 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138365 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 138365' 00:09:52.347 killing process with pid 138365 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 138365 00:09:52.347 11:04:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 138365 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.609 11:04:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.159 00:09:55.159 real 0m7.137s 00:09:55.159 user 0m15.372s 00:09:55.159 sys 0m3.530s 00:09:55.159 11:04:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:55.159 ************************************ 00:09:55.159 END TEST nvmf_bdev_io_wait 00:09:55.159 ************************************ 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.159 ************************************ 00:09:55.159 START TEST nvmf_queue_depth 00:09:55.159 ************************************ 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:55.159 * Looking for test storage... 
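As a sanity check on the bdevperf summaries above: with the 4096-byte I/O size shown in each job line, the MiB/s column is just IOPS × 4096 / 2^20, i.e. IOPS / 256. Checking this against the flush job's totals:

```shell
# Cross-check the IOPS and MiB/s columns from the bdevperf tables above:
# at a 4096-byte I/O size, MiB/s = IOPS * 4096 / (1024 * 1024).
iops=198448.80                                # flush job, core mask 0x40
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 4096 / 1048576 }')
echo "$iops IOPS at 4 KiB = $mibs MiB/s"      # table reports 775.19 MiB/s
```

The same arithmetic reproduces the other jobs' MiB/s figures (e.g. the 10172.41 IOPS write job's 39.74 MiB/s).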
00:09:55.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:55.159 
11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.159 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:55.159 --rc genhtml_branch_coverage=1 00:09:55.159 --rc genhtml_function_coverage=1 00:09:55.159 --rc genhtml_legend=1 00:09:55.159 --rc geninfo_all_blocks=1 00:09:55.159 --rc geninfo_unexecuted_blocks=1 00:09:55.159 00:09:55.159 ' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.159 --rc genhtml_branch_coverage=1 00:09:55.159 --rc genhtml_function_coverage=1 00:09:55.159 --rc genhtml_legend=1 00:09:55.159 --rc geninfo_all_blocks=1 00:09:55.159 --rc geninfo_unexecuted_blocks=1 00:09:55.159 00:09:55.159 ' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.159 --rc genhtml_branch_coverage=1 00:09:55.159 --rc genhtml_function_coverage=1 00:09:55.159 --rc genhtml_legend=1 00:09:55.159 --rc geninfo_all_blocks=1 00:09:55.159 --rc geninfo_unexecuted_blocks=1 00:09:55.159 00:09:55.159 ' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.159 --rc genhtml_branch_coverage=1 00:09:55.159 --rc genhtml_function_coverage=1 00:09:55.159 --rc genhtml_legend=1 00:09:55.159 --rc geninfo_all_blocks=1 00:09:55.159 --rc geninfo_unexecuted_blocks=1 00:09:55.159 00:09:55.159 ' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.159 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.160 11:04:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.160 11:04:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.160 11:04:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.160 11:04:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.071 11:04:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.071 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:57.072 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:57.072 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:57.072 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:57.072 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:57.072 
11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:57.072 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:57.333 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up
00:09:57.333 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:57.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:57.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms
00:09:57.334
00:09:57.334 --- 10.0.0.2 ping statistics ---
00:09:57.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.334 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:57.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:57.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:09:57.334
00:09:57.334 --- 10.0.0.1 ping statistics ---
00:09:57.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:57.334 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=140628
00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 140628 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140628 ']' 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.334 11:04:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.334 [2024-11-17 11:04:21.842680] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:09:57.334 [2024-11-17 11:04:21.842788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.334 [2024-11-17 11:04:21.916831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.334 [2024-11-17 11:04:21.964139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.334 [2024-11-17 11:04:21.964207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
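The connectivity checks earlier in this log (`ping -c 1` in each direction between the namespaced target IP 10.0.0.2 and the initiator IP 10.0.0.1) each end with an `rtt min/avg/max/mdev` summary line. A small, hypothetical helper (not part of the test suite) for pulling the average RTT out of such lines when post-processing a log like this one:

```python
import re

def parse_rtt_avg(line: str) -> float:
    """Extract the average RTT in ms from an iputils ping summary line,
    e.g. 'rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms'."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line
    )
    if m is None:
        raise ValueError("not a ping rtt summary line")
    return float(m.group(2))  # second field is the average

# The two summary lines recorded above, one per direction:
print(parse_rtt_avg("rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms"))
print(parse_rtt_avg("rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms"))
```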
00:09:57.334 [2024-11-17 11:04:21.964235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.334 [2024-11-17 11:04:21.964247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.334 [2024-11-17 11:04:21.964256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.334 [2024-11-17 11:04:21.964897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 [2024-11-17 11:04:22.109578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 Malloc0 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 [2024-11-17 11:04:22.157592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.595 11:04:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140656 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140656 /var/tmp/bdevperf.sock 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140656 ']' 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:57.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.595 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:57.595 [2024-11-17 11:04:22.203593] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:09:57.595 [2024-11-17 11:04:22.203673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140656 ] 00:09:57.856 [2024-11-17 11:04:22.273337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.856 [2024-11-17 11:04:22.319729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.856 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.856 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:57.856 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:57.856 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.856 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:58.117 NVMe0n1 00:09:58.117 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.118 11:04:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:58.118 Running I/O for 10 seconds... 
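The bdevperf run launched above uses `-q 1024 -o 4096 -w verify -t 10`: at most 1024 outstanding 4 KiB I/Os for 10 seconds. A toy admission model (not SPDK code) of what such a queue-depth cap means for submissions:

```python
class QueueDepthLimiter:
    """Toy model of a bdevperf-style queue-depth cap: at most
    `depth` I/Os may be in flight at once (hypothetical sketch)."""

    def __init__(self, depth: int):
        self.depth = depth
        self.in_flight = 0

    def try_submit(self) -> bool:
        # Admit a new I/O only while the cap has not been reached.
        if self.in_flight < self.depth:
            self.in_flight += 1
            return True
        return False

    def complete(self) -> None:
        # A completion frees one slot for a future submission.
        assert self.in_flight > 0, "completion without a submission"
        self.in_flight -= 1

q = QueueDepthLimiter(depth=2)
print(q.try_submit(), q.try_submit(), q.try_submit())  # third submit refused
q.complete()
print(q.try_submit())  # a completion freed a slot
```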
00:10:00.461 8206.00 IOPS, 32.05 MiB/s
[2024-11-17T10:04:25.687Z] 8698.50 IOPS, 33.98 MiB/s
[2024-11-17T10:04:27.074Z] 8652.00 IOPS, 33.80 MiB/s
[2024-11-17T10:04:28.017Z] 8698.50 IOPS, 33.98 MiB/s
[2024-11-17T10:04:28.960Z] 8786.20 IOPS, 34.32 MiB/s
[2024-11-17T10:04:29.905Z] 8816.00 IOPS, 34.44 MiB/s
[2024-11-17T10:04:30.845Z] 8814.86 IOPS, 34.43 MiB/s
[2024-11-17T10:04:31.788Z] 8815.25 IOPS, 34.43 MiB/s
[2024-11-17T10:04:32.733Z] 8828.33 IOPS, 34.49 MiB/s
[2024-11-17T10:04:32.995Z] 8807.50 IOPS, 34.40 MiB/s
00:10:08.337 Latency(us)
00:10:08.337 [2024-11-17T10:04:32.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:08.337 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:08.337 Verification LBA range: start 0x0 length 0x4000
00:10:08.337 NVMe0n1 : 10.06 8853.61 34.58 0.00 0.00 115177.98 8058.50 78837.38
00:10:08.337 [2024-11-17T10:04:32.995Z] ===================================================================================================================
00:10:08.337 [2024-11-17T10:04:32.995Z] Total : 8853.61 34.58 0.00 0.00 115177.98 8058.50 78837.38
00:10:08.337 {
00:10:08.337   "results": [
00:10:08.337     {
00:10:08.337       "job": "NVMe0n1",
00:10:08.337       "core_mask": "0x1",
00:10:08.337       "workload": "verify",
00:10:08.337       "status": "finished",
00:10:08.337       "verify_range": {
00:10:08.337         "start": 0,
00:10:08.337         "length": 16384
00:10:08.337       },
00:10:08.337       "queue_depth": 1024,
00:10:08.337       "io_size": 4096,
00:10:08.337       "runtime": 10.063577,
00:10:08.337       "iops": 8853.611394835058,
00:10:08.337       "mibps": 34.584419511074444,
00:10:08.337       "io_failed": 0,
00:10:08.337       "io_timeout": 0,
00:10:08.337       "avg_latency_us": 115177.97951758199,
00:10:08.337       "min_latency_us": 8058.500740740741,
00:10:08.337       "max_latency_us": 78837.38074074074
00:10:08.337     }
00:10:08.337   ],
00:10:08.337   "core_count": 1
00:10:08.337 }
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- #
killprocess 140656
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140656 ']'
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140656
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140656
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140656'
00:10:08.337 killing process with pid 140656
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140656
00:10:08.337 Received shutdown signal, test time was about 10.000000 seconds
00:10:08.337
00:10:08.337 Latency(us)
00:10:08.337 [2024-11-17T10:04:32.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:08.337 [2024-11-17T10:04:32.995Z] ===================================================================================================================
00:10:08.337 [2024-11-17T10:04:32.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:08.337 11:04:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140656
00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
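The MiB/s column in the bdevperf summary above is just IOPS times the I/O size (`-o 4096`, i.e. 4 KiB per I/O). A quick arithmetic check against the reported figures:

```python
# Values taken from the JSON results block in the log above.
IOPS = 8853.611394835058
IO_SIZE = 4096  # bytes per I/O, from bdevperf's -o 4096

mib_per_s = IOPS * IO_SIZE / (1024 * 1024)
print(round(mib_per_s, 2))  # matches the reported 34.58 MiB/s
```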
00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.599 rmmod nvme_tcp 00:10:08.599 rmmod nvme_fabrics 00:10:08.599 rmmod nvme_keyring 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 140628 ']' 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 140628 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140628 ']' 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140628 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140628 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140628' 00:10:08.599 killing process with pid 140628 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140628 00:10:08.599 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140628 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.862 11:04:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.782 11:04:35 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:10.782
00:10:10.782 real 0m16.111s
00:10:10.782 user 0m22.460s
00:10:10.782 sys 0m3.200s
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:10:10.782 ************************************
00:10:10.782 END TEST nvmf_queue_depth
00:10:10.782 ************************************
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:10.782 ************************************
00:10:10.782 START TEST nvmf_target_multipath ************************************
00:10:10.782 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:10:11.043 * Looking for test storage...
00:10:11.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:11.043 11:04:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
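The `scripts/common.sh` trace above walks `cmp_versions` field by field to decide that lcov 1.15 is older than 2 before enabling the coverage options. A hypothetical Python equivalent of that dotted-version comparison:

```python
def version_lt(a: str, b: str) -> bool:
    """Numeric, field-by-field comparison of dotted versions, in the
    spirit of the cmp_versions shell helper traced above.
    Missing fields count as 0, so '1' compares equal to '1.0'."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))  # pad the shorter version with zeros
    pb += [0] * (width - len(pb))
    return pa < pb  # Python compares lists element by element

print(version_lt("1.15", "2"))    # True: 1 < 2 in the first field
print(version_lt("1.15", "1.2"))  # False: numeric, not lexicographic (15 > 2)
```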
00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.043 --rc genhtml_branch_coverage=1 00:10:11.043 --rc genhtml_function_coverage=1 00:10:11.043 --rc genhtml_legend=1 00:10:11.043 --rc geninfo_all_blocks=1 00:10:11.043 --rc geninfo_unexecuted_blocks=1 00:10:11.043 00:10:11.043 ' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.043 --rc genhtml_branch_coverage=1 00:10:11.043 --rc genhtml_function_coverage=1 00:10:11.043 --rc genhtml_legend=1 00:10:11.043 --rc geninfo_all_blocks=1 00:10:11.043 --rc geninfo_unexecuted_blocks=1 00:10:11.043 00:10:11.043 ' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.043 --rc genhtml_branch_coverage=1 00:10:11.043 --rc genhtml_function_coverage=1 00:10:11.043 --rc genhtml_legend=1 00:10:11.043 --rc geninfo_all_blocks=1 00:10:11.043 --rc geninfo_unexecuted_blocks=1 00:10:11.043 00:10:11.043 ' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.043 --rc genhtml_branch_coverage=1 00:10:11.043 --rc genhtml_function_coverage=1 00:10:11.043 --rc genhtml_legend=1 00:10:11.043 --rc geninfo_all_blocks=1 00:10:11.043 --rc geninfo_unexecuted_blocks=1 00:10:11.043 00:10:11.043 ' 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.043 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.044 11:04:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:13.588 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:13.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:13.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:13.589 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:13.589 11:04:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:13.589 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:13.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:10:13.589 00:10:13.589 --- 10.0.0.2 ping statistics --- 00:10:13.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.589 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:13.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:10:13.589 00:10:13.589 --- 10.0.0.1 ping statistics --- 00:10:13.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.589 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:13.589 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:13.590 only one NIC for nvmf test 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:13.590 11:04:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.590 11:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.590 rmmod nvme_tcp 00:10:13.590 rmmod nvme_fabrics 00:10:13.590 rmmod nvme_keyring 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.590 11:04:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:15.499 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.500 00:10:15.500 real 0m4.683s 00:10:15.500 user 0m0.951s 00:10:15.500 sys 0m1.746s 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:15.500 ************************************ 00:10:15.500 END TEST nvmf_target_multipath 00:10:15.500 ************************************ 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.500 11:04:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.760 ************************************ 00:10:15.760 START TEST nvmf_zcopy 00:10:15.760 ************************************ 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:15.760 * Looking for test storage... 00:10:15.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.760 11:04:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.760 --rc genhtml_branch_coverage=1 00:10:15.760 --rc genhtml_function_coverage=1 00:10:15.760 --rc genhtml_legend=1 00:10:15.760 --rc geninfo_all_blocks=1 00:10:15.760 --rc geninfo_unexecuted_blocks=1 00:10:15.760 00:10:15.760 ' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.760 --rc genhtml_branch_coverage=1 00:10:15.760 --rc genhtml_function_coverage=1 00:10:15.760 --rc genhtml_legend=1 00:10:15.760 --rc geninfo_all_blocks=1 00:10:15.760 --rc geninfo_unexecuted_blocks=1 00:10:15.760 00:10:15.760 ' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.760 --rc genhtml_branch_coverage=1 00:10:15.760 --rc genhtml_function_coverage=1 00:10:15.760 --rc genhtml_legend=1 00:10:15.760 --rc geninfo_all_blocks=1 00:10:15.760 --rc geninfo_unexecuted_blocks=1 00:10:15.760 00:10:15.760 ' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.760 --rc genhtml_branch_coverage=1 00:10:15.760 --rc 
genhtml_function_coverage=1 00:10:15.760 --rc genhtml_legend=1 00:10:15.760 --rc geninfo_all_blocks=1 00:10:15.760 --rc geninfo_unexecuted_blocks=1 00:10:15.760 00:10:15.760 ' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.760 11:04:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.760 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.761 11:04:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.761 11:04:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.317 11:04:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.317 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.318 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.318 11:04:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.318 11:04:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:10:18.318 00:10:18.318 --- 10.0.0.2 ping statistics --- 00:10:18.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.318 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:18.318 00:10:18.318 --- 10.0.0.1 ping statistics --- 00:10:18.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.318 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=145865 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 145865 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 145865 ']' 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.318 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.319 [2024-11-17 11:04:42.679163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:18.319 [2024-11-17 11:04:42.679241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.319 [2024-11-17 11:04:42.756016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.319 [2024-11-17 11:04:42.803842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.319 [2024-11-17 11:04:42.803906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.319 [2024-11-17 11:04:42.803930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.319 [2024-11-17 11:04:42.803942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.319 [2024-11-17 11:04:42.803954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.319 [2024-11-17 11:04:42.804598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.319 [2024-11-17 11:04:42.944116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.319 [2024-11-17 11:04:42.960355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.319 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.582 malloc0 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:18.582 { 00:10:18.582 "params": { 00:10:18.582 "name": "Nvme$subsystem", 00:10:18.582 "trtype": "$TEST_TRANSPORT", 00:10:18.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.582 "adrfam": "ipv4", 00:10:18.582 "trsvcid": "$NVMF_PORT", 00:10:18.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.582 "hdgst": ${hdgst:-false}, 00:10:18.582 "ddgst": ${ddgst:-false} 00:10:18.582 }, 00:10:18.582 "method": "bdev_nvme_attach_controller" 00:10:18.582 } 00:10:18.582 EOF 00:10:18.582 )") 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:18.582 11:04:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:18.582 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:18.582 11:04:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:18.582 "params": { 00:10:18.582 "name": "Nvme1", 00:10:18.582 "trtype": "tcp", 00:10:18.582 "traddr": "10.0.0.2", 00:10:18.582 "adrfam": "ipv4", 00:10:18.582 "trsvcid": "4420", 00:10:18.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.582 "hdgst": false, 00:10:18.582 "ddgst": false 00:10:18.582 }, 00:10:18.582 "method": "bdev_nvme_attach_controller" 00:10:18.582 }' 00:10:18.582 [2024-11-17 11:04:43.046719] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:18.582 [2024-11-17 11:04:43.046798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146001 ] 00:10:18.582 [2024-11-17 11:04:43.112825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.582 [2024-11-17 11:04:43.158088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.844 Running I/O for 10 seconds... 
00:10:21.178 5779.00 IOPS, 45.15 MiB/s
[2024-11-17T10:04:46.782Z] 5835.50 IOPS, 45.59 MiB/s
[2024-11-17T10:04:47.725Z] 5847.33 IOPS, 45.68 MiB/s
[2024-11-17T10:04:48.670Z] 5866.50 IOPS, 45.83 MiB/s
[2024-11-17T10:04:49.608Z] 5877.40 IOPS, 45.92 MiB/s
[2024-11-17T10:04:50.553Z] 5881.33 IOPS, 45.95 MiB/s
[2024-11-17T10:04:51.493Z] 5883.43 IOPS, 45.96 MiB/s
[2024-11-17T10:04:52.884Z] 5888.88 IOPS, 46.01 MiB/s
[2024-11-17T10:04:53.828Z] 5886.00 IOPS, 45.98 MiB/s
[2024-11-17T10:04:53.828Z] 5891.80 IOPS, 46.03 MiB/s
00:10:29.170 Latency(us)
00:10:29.170 [2024-11-17T10:04:53.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:29.170 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:29.170 Verification LBA range: start 0x0 length 0x1000
00:10:29.170 Nvme1n1 : 10.02 5894.36 46.05 0.00 0.00 21658.65 4563.25 29321.29
00:10:29.170 [2024-11-17T10:04:53.828Z] ===================================================================================================================
00:10:29.170 [2024-11-17T10:04:53.829Z] Total : 5894.36 46.05 0.00 0.00 21658.65 4563.25 29321.29
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147210
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:10:29.171 11:04:53
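The summary columns above are internally consistent: bdevperf ran with `-o 8192` (8 KiB I/Os), so the MiB/s column should equal IOPS × 8192 / 1048576, i.e. IOPS / 128. A quick sanity check against the reported total:

```shell
# Verify the bdevperf summary: 5894.36 IOPS at 8192-byte I/Os
# should come out to 46.05 MiB/s (IOPS * 8192 bytes / 2^20).
iops=5894.36
mib_s=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 8192 / 1048576 }')
echo "$mib_s"   # 46.05, matching the MiB/s column in the table
```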
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:29.171 { 00:10:29.171 "params": { 00:10:29.171 "name": "Nvme$subsystem", 00:10:29.171 "trtype": "$TEST_TRANSPORT", 00:10:29.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.171 "adrfam": "ipv4", 00:10:29.171 "trsvcid": "$NVMF_PORT", 00:10:29.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.171 "hdgst": ${hdgst:-false}, 00:10:29.171 "ddgst": ${ddgst:-false} 00:10:29.171 }, 00:10:29.171 "method": "bdev_nvme_attach_controller" 00:10:29.171 } 00:10:29.171 EOF 00:10:29.171 )") 00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:29.171 [2024-11-17 11:04:53.694756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.694820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:29.171 11:04:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:29.171 "params": { 00:10:29.171 "name": "Nvme1", 00:10:29.171 "trtype": "tcp", 00:10:29.171 "traddr": "10.0.0.2", 00:10:29.171 "adrfam": "ipv4", 00:10:29.171 "trsvcid": "4420", 00:10:29.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.171 "hdgst": false, 00:10:29.171 "ddgst": false 00:10:29.171 }, 00:10:29.171 "method": "bdev_nvme_attach_controller" 00:10:29.171 }' 00:10:29.171 [2024-11-17 11:04:53.702694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.702719] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.710715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.710747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.718736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.718758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.726758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.726781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.730120] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:29.171 [2024-11-17 11:04:53.730189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147210 ] 00:10:29.171 [2024-11-17 11:04:53.734780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.734822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.742815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.742836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.750837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.750857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.758865] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.758899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.766890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.766910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.774904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.774924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.782925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.782945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.790942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.790962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.797663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.171 [2024-11-17 11:04:53.798962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.798982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.807041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.807085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.171 [2024-11-17 11:04:53.815046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.815085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:29.171 [2024-11-17 11:04:53.823029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.171 [2024-11-17 11:04:53.823050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.831051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.831073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.839071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.839092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.846444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.434 [2024-11-17 11:04:53.847092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.847112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.855114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.855134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.863175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.863210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.871205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.871248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.879227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.879273] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.887252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.887296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.895273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.895317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.903294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.903337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.911298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.911337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.919291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.919313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.927352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.927394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.935374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.935417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.943362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.943386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:29.434 [2024-11-17 11:04:53.951375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.951396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.959409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.959436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.967422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.967445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.975464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.975487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.983469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.983493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.991487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.991539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:53.999529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:53.999551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.007554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.007576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.015577] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.015599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.023604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.023627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.031609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.031632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.039632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.039656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.047655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.047677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.055676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.055698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.063699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.063720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.071721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.071742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.079747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.079772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.434 [2024-11-17 11:04:54.087767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.434 [2024-11-17 11:04:54.087789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.095789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.095826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.103828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.103849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.111850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.111885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.119865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.119889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.127900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.127922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.135921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.135957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.143942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 
[2024-11-17 11:04:54.143968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.151948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.151968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.159970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.159991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.167993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.168014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.176019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.176041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.217721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.217749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.224155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.224178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 Running I/O for 5 seconds... 
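The `--json /dev/fd/62` and `--json /dev/fd/63` paths in the bdevperf command lines above come from bash process substitution: `<(command)` exposes the command's stdout as a readable `/dev/fd` entry, letting the harness feed the generated JSON to bdevperf without a temp file. A minimal sketch of the mechanism, with `cat` standing in for `bdevperf --json` and a trivial generator standing in for `gen_nvmf_target_json`:

```shell
# Process substitution: gen_json's output appears as a /dev/fd/NN file
# that the consumer (here cat, in the log bdevperf) opens and reads.
gen_json() { printf '{"method": "bdev_nvme_attach_controller"}\n'; }
out=$(cat <(gen_json))
echo "$out"
```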
00:10:29.696 [2024-11-17 11:04:54.232174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.232195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.246442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.246472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.257325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.257355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.268154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.268198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.278953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.278981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.289607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.289635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.300486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.300515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.313488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.313516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.323487] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.323516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.334227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.334254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.696 [2024-11-17 11:04:54.344891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.696 [2024-11-17 11:04:54.344919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.355603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.355631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.369454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.369489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.379476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.379504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.390488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.390540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.401226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.401253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.411972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.412013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.422991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.423019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.433752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.433781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.446492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.446543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.456874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.456902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.467975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.468004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.480672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.480699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.490592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.490619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.501027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 
[2024-11-17 11:04:54.501055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.511827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.511854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.524131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.524159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.533783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.533811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.544547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.956 [2024-11-17 11:04:54.544576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.956 [2024-11-17 11:04:54.555972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.556014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.957 [2024-11-17 11:04:54.566635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.566664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.957 [2024-11-17 11:04:54.578997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.579025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.957 [2024-11-17 11:04:54.589260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.589288] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.957 [2024-11-17 11:04:54.599774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.599802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.957 [2024-11-17 11:04:54.610570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.957 [2024-11-17 11:04:54.610598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.621424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.621452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.633910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.633937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.644002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.644030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.654926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.654955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.668548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.668575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.217 [2024-11-17 11:04:54.678753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.678780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:30.217 [2024-11-17 11:04:54.689493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.217 [2024-11-17 11:04:54.689520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[identical *ERROR* pair repeated at roughly 10 ms intervals from 11:04:54.701 through 11:04:56.550; throughput samples retained below]
00:10:30.741 11802.00 IOPS, 92.20 MiB/s [2024-11-17T10:04:55.399Z]
00:10:31.788 11881.00 IOPS, 92.82 MiB/s [2024-11-17T10:04:56.446Z]
00:10:32.051 [2024-11-17 11:04:56.558786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.558813]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.572619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.572648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.583088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.583116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.593456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.593484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.604014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.604056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.614561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.614590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.625314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.625358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.636051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.636087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.649189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.649217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.051 [2024-11-17 11:04:56.659430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.659472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.669767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.669794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.680380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.680407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.690854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.690882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.051 [2024-11-17 11:04:56.701350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.051 [2024-11-17 11:04:56.701379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.712012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.712041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.724965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.724992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.735025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.735053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.745475] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.745503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.755885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.755913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.766797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.766825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.777593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.777621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.788222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.788250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.799309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.799336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.809721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.809749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.820155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.820182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.313 [2024-11-17 11:04:56.830228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.313 [2024-11-17 11:04:56.830256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.841097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.841134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.852250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.852278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.863319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.863347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.875799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.875828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.885863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.885890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.896408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.896436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.906849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.906876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.917349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 
[2024-11-17 11:04:56.917377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.928227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.928255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.939126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.939152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.949461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.949488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.314 [2024-11-17 11:04:56.960314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.314 [2024-11-17 11:04:56.960341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:56.971316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:56.971345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:56.982247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:56.982275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:56.993150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:56.993178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.004091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.004119] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.014844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.014872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.025502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.025538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.038193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.038220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.048423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.048458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.059157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.059184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.072040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.072067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.083827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.083854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.093008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.093037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:32.577 [2024-11-17 11:04:57.104244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.104271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.116472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.116499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.125851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.125879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.137618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.137646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.148647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.148674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.159460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.159488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.170129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.170156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.181042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.181069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.192001] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.192028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.202764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.202793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.215096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.215123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.577 [2024-11-17 11:04:57.223929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.577 [2024-11-17 11:04:57.223955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.235254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.235282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 11905.00 IOPS, 93.01 MiB/s [2024-11-17T10:04:57.498Z] [2024-11-17 11:04:57.245874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.245902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.256975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.257002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.267351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.267378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.277807] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.277849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.289000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.289027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.299626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.299653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.312117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.312145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.321999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.322027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.332687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.332715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.344065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.344093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.355909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.355938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.366156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.366184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.377043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.377072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.387928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.840 [2024-11-17 11:04:57.387956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.840 [2024-11-17 11:04:57.398609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.398637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.411213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.411241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.421121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.421150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.431913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.431940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.442632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.442660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.453029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 
[2024-11-17 11:04:57.453058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.465306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.465334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.474764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.474792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.841 [2024-11-17 11:04:57.485627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.841 [2024-11-17 11:04:57.485656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.496202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.496231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.507266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.507294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.517855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.517883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.530184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.530214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.539848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.539876] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.550378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.550406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.561346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.561373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.573958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.573986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.584175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.584202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.594791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.594819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.605471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.605500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.616293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.616320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.628633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.628660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:33.102 [2024-11-17 11:04:57.638575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.638603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.102 [2024-11-17 11:04:57.648926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.102 [2024-11-17 11:04:57.648953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.659643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.659671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.670653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.670688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.680859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.680886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.691396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.691423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.701868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.701896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.712404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.712432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103 [2024-11-17 11:04:57.723087] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.103 [2024-11-17 11:04:57.723115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.103
[... previous two error messages repeated every ~10 ms from 11:04:57.733 through 11:04:58.246 ...]
11933.00 IOPS, 93.23 MiB/s [2024-11-17T10:04:58.287Z]
[... same error pair repeated every ~10 ms from 11:04:58.246 through 11:04:59.247 ...]
11940.80 IOPS, 93.29 MiB/s [2024-11-17T10:04:59.335Z] [2024-11-17 11:04:59.247500]
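For context, the error flood above comes from the zcopy test re-issuing the add-namespace RPC while NSID 1 is already attached, so every retry is rejected by spdk_nvmf_subsystem_add_ns_ext. A minimal sketch of that collision is below; it only prints the RPC invocations rather than sending them (a live SPDK target would be required), and the rpc.py path and the Malloc0 bdev name are assumptions, not taken from this log.

```shell
# Hypothetical reproduction sketch: the script path and bdev name are assumptions.
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

# First add attaches the bdev as namespace 1 and would succeed.
echo "$RPC nvmf_subsystem_add_ns -n 1 $NQN Malloc0"
# A second add with the same --nsid is rejected:
#   "Requested NSID 1 already in use" / "Unable to add namespace"
echo "$RPC nvmf_subsystem_add_ns -n 1 $NQN Malloc0"
```

The test loops the second call deliberately while I/O runs, which is why the pair of errors recurs every ~10 ms until the namespace is removed.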
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.677 [2024-11-17 11:04:59.247536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.677 [2024-11-17 11:04:59.252729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.677 [2024-11-17 11:04:59.252756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.677
00:10:34.677 Latency(us)
[2024-11-17T10:04:59.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:34.677
Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:34.677
Nvme1n1 : 5.01 11943.19 93.31 0.00 0.00 10703.30 4636.07 23204.60
[2024-11-17T10:04:59.335Z] ===================================================================================================================
[2024-11-17T10:04:59.335Z] Total : 11943.19 93.31 0.00 0.00 10703.30 4636.07 23204.60 00:10:34.677
[... same error pair repeated every ~8 ms from 11:04:59.260 through 11:04:59.437 ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147210) - No such process 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147210
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.940
delay0
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.940
11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:34.940 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.940 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.940 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.940 11:04:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:35.200 [2024-11-17 11:04:59.597721] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:41.787 [2024-11-17 11:05:05.774693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5d5a0 is same with the state(6) to be set 00:10:41.787 Initializing NVMe Controllers 00:10:41.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:41.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:41.787 Initialization complete. Launching workers. 
00:10:41.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 58 00:10:41.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 345, failed to submit 33 00:10:41.787 success 161, unsuccessful 184, failed 0 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.787 rmmod nvme_tcp 00:10:41.787 rmmod nvme_fabrics 00:10:41.787 rmmod nvme_keyring 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 145865 ']' 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 145865 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 145865 ']' 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 145865 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 145865 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 145865' 00:10:41.787 killing process with pid 145865 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 145865 00:10:41.787 11:05:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 145865 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.787 11:05:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.699 00:10:43.699 real 0m27.974s 00:10:43.699 user 0m42.036s 00:10:43.699 sys 0m7.449s 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.699 ************************************ 00:10:43.699 END TEST nvmf_zcopy 00:10:43.699 ************************************ 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.699 ************************************ 00:10:43.699 START TEST nvmf_nmic 00:10:43.699 ************************************ 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.699 * Looking for test storage... 
00:10:43.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.699 11:05:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 
00:10:43.699 00:10:43.699 ' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.699 11:05:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.699 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.700 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.960 
11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.960 11:05:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.500 11:05:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.500 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.500 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.500 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:46.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.501 
11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:46.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:10:46.501 00:10:46.501 --- 10.0.0.2 ping statistics --- 00:10:46.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.501 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:10:46.501 00:10:46.501 --- 10.0.0.1 ping statistics --- 00:10:46.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.501 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=150610 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.501 
11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 150610 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 150610 ']' 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.501 11:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.501 [2024-11-17 11:05:10.790111] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:46.501 [2024-11-17 11:05:10.790191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.501 [2024-11-17 11:05:10.866287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.501 [2024-11-17 11:05:10.917532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.501 [2024-11-17 11:05:10.917610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.501 [2024-11-17 11:05:10.917623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.501 [2024-11-17 11:05:10.917634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:46.501 [2024-11-17 11:05:10.917654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.501 [2024-11-17 11:05:10.919246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.501 [2024-11-17 11:05:10.919282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.501 [2024-11-17 11:05:10.919345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.501 [2024-11-17 11:05:10.919347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.501 [2024-11-17 11:05:11.062672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.501 11:05:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.501 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.501 Malloc0 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 [2024-11-17 11:05:11.126558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:46.502 test case1: single bdev can't be used in multiple subsystems 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.502 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.502 [2024-11-17 11:05:11.150359] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:46.502 [2024-11-17 11:05:11.150389] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:46.502 [2024-11-17 11:05:11.150403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:46.502 request: 00:10:46.502 { 00:10:46.502 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:46.502 "namespace": { 00:10:46.502 "bdev_name": "Malloc0", 00:10:46.502 "no_auto_visible": false 00:10:46.502 }, 00:10:46.502 "method": "nvmf_subsystem_add_ns", 00:10:46.502 "req_id": 1 00:10:46.765 } 00:10:46.765 Got JSON-RPC error response 00:10:46.765 response: 00:10:46.765 { 00:10:46.765 "code": -32602, 00:10:46.765 "message": "Invalid parameters" 00:10:46.765 } 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:46.765 Adding namespace failed - expected result. 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:46.765 test case2: host connect to nvmf target in multiple paths 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:46.765 [2024-11-17 11:05:11.162491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.765 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:47.336 11:05:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:47.908 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:47.908 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:47.908 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.908 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:47.908 11:05:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:49.816 11:05:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:49.816 [global] 00:10:49.816 thread=1 
00:10:49.816 invalidate=1 00:10:49.816 rw=write 00:10:49.816 time_based=1 00:10:49.816 runtime=1 00:10:49.816 ioengine=libaio 00:10:49.816 direct=1 00:10:49.816 bs=4096 00:10:49.816 iodepth=1 00:10:49.816 norandommap=0 00:10:49.816 numjobs=1 00:10:49.816 00:10:49.816 verify_dump=1 00:10:49.816 verify_backlog=512 00:10:49.816 verify_state_save=0 00:10:49.816 do_verify=1 00:10:49.816 verify=crc32c-intel 00:10:49.816 [job0] 00:10:49.816 filename=/dev/nvme0n1 00:10:49.816 Could not set queue depth (nvme0n1) 00:10:50.382 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:50.382 fio-3.35 00:10:50.382 Starting 1 thread 00:10:51.769 00:10:51.769 job0: (groupid=0, jobs=1): err= 0: pid=151253: Sun Nov 17 11:05:16 2024 00:10:51.769 read: IOPS=22, BW=91.2KiB/s (93.4kB/s)(92.0KiB/1009msec) 00:10:51.769 slat (nsec): min=14414, max=36399, avg=25709.30, stdev=8548.32 00:10:51.769 clat (usec): min=264, max=41395, avg=39211.65, stdev=8490.89 00:10:51.769 lat (usec): min=283, max=41413, avg=39237.36, stdev=8492.34 00:10:51.769 clat percentiles (usec): 00:10:51.769 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:51.769 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:51.769 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:51.769 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:51.769 | 99.99th=[41157] 00:10:51.769 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:51.769 slat (nsec): min=5840, max=59630, avg=17976.56, stdev=9379.57 00:10:51.769 clat (usec): min=129, max=408, avg=185.33, stdev=35.03 00:10:51.769 lat (usec): min=137, max=427, avg=203.30, stdev=35.94 00:10:51.769 clat percentiles (usec): 00:10:51.769 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 161], 00:10:51.769 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:51.770 | 70.00th=[ 188], 80.00th=[ 
202], 90.00th=[ 241], 95.00th=[ 255], 00:10:51.770 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 408], 99.95th=[ 408], 00:10:51.770 | 99.99th=[ 408] 00:10:51.770 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:51.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:51.770 lat (usec) : 250=89.53%, 500=6.36% 00:10:51.770 lat (msec) : 50=4.11% 00:10:51.770 cpu : usr=0.79%, sys=0.99%, ctx=535, majf=0, minf=1 00:10:51.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.770 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.770 00:10:51.770 Run status group 0 (all jobs): 00:10:51.770 READ: bw=91.2KiB/s (93.4kB/s), 91.2KiB/s-91.2KiB/s (93.4kB/s-93.4kB/s), io=92.0KiB (94.2kB), run=1009-1009msec 00:10:51.770 WRITE: bw=2030KiB/s (2078kB/s), 2030KiB/s-2030KiB/s (2078kB/s-2078kB/s), io=2048KiB (2097kB), run=1009-1009msec 00:10:51.770 00:10:51.770 Disk stats (read/write): 00:10:51.770 nvme0n1: ios=70/512, merge=0/0, ticks=806/66, in_queue=872, util=91.58% 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.770 rmmod nvme_tcp 00:10:51.770 rmmod nvme_fabrics 00:10:51.770 rmmod nvme_keyring 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 150610 ']' 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 150610 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 150610 ']' 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 
150610 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150610 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150610' 00:10:51.770 killing process with pid 150610 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 150610 00:10:51.770 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 150610 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.031 11:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.942 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.942 00:10:53.942 real 0m10.375s 00:10:53.942 user 0m23.377s 00:10:53.942 sys 0m2.862s 00:10:53.942 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.942 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.942 ************************************ 00:10:53.942 END TEST nvmf_nmic 00:10:53.942 ************************************ 00:10:53.943 11:05:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.943 11:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.943 11:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.943 11:05:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.203 ************************************ 00:10:54.203 START TEST nvmf_fio_target 00:10:54.203 ************************************ 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:54.203 * Looking for test storage... 
00:10:54.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:54.203 11:05:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.203 
--rc genhtml_branch_coverage=1 00:10:54.203 --rc genhtml_function_coverage=1 00:10:54.203 --rc genhtml_legend=1 00:10:54.203 --rc geninfo_all_blocks=1 00:10:54.203 --rc geninfo_unexecuted_blocks=1 00:10:54.203 00:10:54.203 ' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.203 --rc genhtml_branch_coverage=1 00:10:54.203 --rc genhtml_function_coverage=1 00:10:54.203 --rc genhtml_legend=1 00:10:54.203 --rc geninfo_all_blocks=1 00:10:54.203 --rc geninfo_unexecuted_blocks=1 00:10:54.203 00:10:54.203 ' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.203 --rc genhtml_branch_coverage=1 00:10:54.203 --rc genhtml_function_coverage=1 00:10:54.203 --rc genhtml_legend=1 00:10:54.203 --rc geninfo_all_blocks=1 00:10:54.203 --rc geninfo_unexecuted_blocks=1 00:10:54.203 00:10:54.203 ' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:54.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.203 --rc genhtml_branch_coverage=1 00:10:54.203 --rc genhtml_function_coverage=1 00:10:54.203 --rc genhtml_legend=1 00:10:54.203 --rc geninfo_all_blocks=1 00:10:54.203 --rc geninfo_unexecuted_blocks=1 00:10:54.203 00:10:54.203 ' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.203 
11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.203 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.204 11:05:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.204 11:05:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.204 11:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.760 11:05:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:56.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:56.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.760 
11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.760 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:56.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.761 11:05:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:56.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.761 11:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:10:56.761 00:10:56.761 --- 10.0.0.2 ping statistics --- 00:10:56.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.761 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:56.761 00:10:56.761 --- 10.0.0.1 ping statistics --- 00:10:56.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.761 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
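The `nvmf_tcp_init` sequence traced above (common.sh@250–291) isolates the target NIC in its own network namespace so initiator and target traffic crosses a real link, then sanity-checks reachability with ping. A paraphrased sketch for reference only — it is not the script itself, requires root and the physical NICs, and the `cvl_0_0`/`cvl_0_1` names and 10.0.0.0/24 addresses are taken from this log:

```
# Target NIC goes into a dedicated namespace; initiator NIC stays in the host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                            # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> host
```

Both pings succeed in the log (0% loss), after which `nvmf_tgt` is launched inside the namespace via `ip netns exec cvl_0_0_ns_spdk`.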
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=153333 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 153333 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 153333 ']' 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.761 [2024-11-17 11:05:21.132554] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:10:56.761 [2024-11-17 11:05:21.132647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.761 [2024-11-17 11:05:21.211657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.761 [2024-11-17 11:05:21.260931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.761 [2024-11-17 11:05:21.260983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.761 [2024-11-17 11:05:21.260997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.761 [2024-11-17 11:05:21.261008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.761 [2024-11-17 11:05:21.261018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:56.761 [2024-11-17 11:05:21.262466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.761 [2024-11-17 11:05:21.262555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.761 [2024-11-17 11:05:21.262491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.761 [2024-11-17 11:05:21.262559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:56.761 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.762 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.762 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.762 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.762 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:57.329 [2024-11-17 11:05:21.716428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.329 11:05:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.588 11:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:57.588 11:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.847 11:05:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:57.847 11:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.106 11:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:58.106 11:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.365 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:58.365 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:58.623 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.191 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:59.191 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.451 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:59.451 11:05:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:59.710 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:59.710 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:59.968 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.226 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.226 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.484 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:00.484 11:05:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.742 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.001 [2024-11-17 11:05:25.449093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.001 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:01.259 11:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:01.518 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:02.088 11:05:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:04.627 11:05:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:04.627 [global] 00:11:04.627 thread=1 00:11:04.627 invalidate=1 00:11:04.627 rw=write 00:11:04.627 time_based=1 00:11:04.627 runtime=1 00:11:04.627 ioengine=libaio 00:11:04.627 direct=1 00:11:04.627 bs=4096 00:11:04.627 iodepth=1 00:11:04.627 norandommap=0 00:11:04.627 numjobs=1 00:11:04.627 00:11:04.627 
verify_dump=1 00:11:04.627 verify_backlog=512 00:11:04.627 verify_state_save=0 00:11:04.627 do_verify=1 00:11:04.627 verify=crc32c-intel 00:11:04.627 [job0] 00:11:04.627 filename=/dev/nvme0n1 00:11:04.627 [job1] 00:11:04.627 filename=/dev/nvme0n2 00:11:04.627 [job2] 00:11:04.627 filename=/dev/nvme0n3 00:11:04.627 [job3] 00:11:04.627 filename=/dev/nvme0n4 00:11:04.627 Could not set queue depth (nvme0n1) 00:11:04.627 Could not set queue depth (nvme0n2) 00:11:04.627 Could not set queue depth (nvme0n3) 00:11:04.627 Could not set queue depth (nvme0n4) 00:11:04.627 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.627 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.627 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.627 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:04.627 fio-3.35 00:11:04.627 Starting 4 threads 00:11:05.567 00:11:05.567 job0: (groupid=0, jobs=1): err= 0: pid=154421: Sun Nov 17 11:05:30 2024 00:11:05.567 read: IOPS=28, BW=113KiB/s (116kB/s)(116KiB/1027msec) 00:11:05.567 slat (nsec): min=6575, max=34557, avg=21701.59, stdev=8643.53 00:11:05.567 clat (usec): min=238, max=41993, avg=31363.90, stdev=17849.50 00:11:05.567 lat (usec): min=244, max=42025, avg=31385.60, stdev=17853.37 00:11:05.567 clat percentiles (usec): 00:11:05.567 | 1.00th=[ 239], 5.00th=[ 251], 10.00th=[ 285], 20.00th=[ 297], 00:11:05.567 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:05.567 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:05.567 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.567 | 99.99th=[42206] 00:11:05.567 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:11:05.567 slat (nsec): min=6727, max=40159, 
avg=11281.99, stdev=5635.79 00:11:05.567 clat (usec): min=167, max=3218, avg=213.84, stdev=137.13 00:11:05.567 lat (usec): min=175, max=3226, avg=225.13, stdev=137.31 00:11:05.567 clat percentiles (usec): 00:11:05.567 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 184], 00:11:05.567 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:11:05.567 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 258], 95.00th=[ 281], 00:11:05.567 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 3228], 99.95th=[ 3228], 00:11:05.567 | 99.99th=[ 3228] 00:11:05.568 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.568 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.568 lat (usec) : 250=82.62%, 500=13.12% 00:11:05.568 lat (msec) : 4=0.18%, 50=4.07% 00:11:05.568 cpu : usr=0.19%, sys=0.68%, ctx=542, majf=0, minf=1 00:11:05.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.568 job1: (groupid=0, jobs=1): err= 0: pid=154422: Sun Nov 17 11:05:30 2024 00:11:05.568 read: IOPS=246, BW=987KiB/s (1011kB/s)(988KiB/1001msec) 00:11:05.568 slat (nsec): min=5508, max=35306, avg=12428.61, stdev=5781.73 00:11:05.568 clat (usec): min=176, max=41977, avg=3558.33, stdev=11206.80 00:11:05.568 lat (usec): min=182, max=42011, avg=3570.76, stdev=11209.88 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:11:05.568 | 30.00th=[ 221], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 255], 00:11:05.568 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[41157], 00:11:05.568 | 99.00th=[42206], 99.50th=[42206], 
99.90th=[42206], 99.95th=[42206], 00:11:05.568 | 99.99th=[42206] 00:11:05.568 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:05.568 slat (nsec): min=6210, max=55582, avg=12269.81, stdev=8031.25 00:11:05.568 clat (usec): min=152, max=911, avg=213.57, stdev=50.11 00:11:05.568 lat (usec): min=161, max=920, avg=225.84, stdev=51.73 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:11:05.568 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 210], 00:11:05.568 | 70.00th=[ 225], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 281], 00:11:05.568 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 914], 99.95th=[ 914], 00:11:05.568 | 99.99th=[ 914] 00:11:05.568 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.568 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.568 lat (usec) : 250=75.10%, 500=22.13%, 1000=0.13% 00:11:05.568 lat (msec) : 50=2.64% 00:11:05.568 cpu : usr=0.70%, sys=0.80%, ctx=761, majf=0, minf=1 00:11:05.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 issued rwts: total=247,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.568 job2: (groupid=0, jobs=1): err= 0: pid=154423: Sun Nov 17 11:05:30 2024 00:11:05.568 read: IOPS=523, BW=2093KiB/s (2143kB/s)(2116KiB/1011msec) 00:11:05.568 slat (nsec): min=5774, max=36165, avg=11758.43, stdev=6375.85 00:11:05.568 clat (usec): min=199, max=41990, avg=1486.30, stdev=7039.85 00:11:05.568 lat (usec): min=206, max=42005, avg=1498.05, stdev=7042.29 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:11:05.568 | 30.00th=[ 
233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:11:05.568 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 306], 00:11:05.568 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.568 | 99.99th=[42206] 00:11:05.568 write: IOPS=1012, BW=4051KiB/s (4149kB/s)(4096KiB/1011msec); 0 zone resets 00:11:05.568 slat (nsec): min=7462, max=58572, avg=13378.75, stdev=6827.39 00:11:05.568 clat (usec): min=149, max=422, avg=194.92, stdev=30.22 00:11:05.568 lat (usec): min=158, max=433, avg=208.30, stdev=33.18 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:11:05.568 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:11:05.568 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 227], 95.00th=[ 245], 00:11:05.568 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 404], 99.95th=[ 424], 00:11:05.568 | 99.99th=[ 424] 00:11:05.568 bw ( KiB/s): min= 8192, max= 8192, per=82.16%, avg=8192.00, stdev= 0.00, samples=1 00:11:05.568 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:05.568 lat (usec) : 250=85.13%, 500=13.84% 00:11:05.568 lat (msec) : 50=1.03% 00:11:05.568 cpu : usr=1.39%, sys=2.57%, ctx=1553, majf=0, minf=2 00:11:05.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.568 job3: (groupid=0, jobs=1): err= 0: pid=154424: Sun Nov 17 11:05:30 2024 00:11:05.568 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:11:05.568 slat (nsec): min=9303, max=36787, avg=23162.50, stdev=10143.91 00:11:05.568 clat (usec): min=40791, max=41992, avg=41185.88, stdev=422.84 00:11:05.568 lat (usec): min=40800, 
max=42015, avg=41209.05, stdev=426.01 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:05.568 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:05.568 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:05.568 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:05.568 | 99.99th=[42206] 00:11:05.568 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:11:05.568 slat (nsec): min=8065, max=51490, avg=12962.74, stdev=6268.92 00:11:05.568 clat (usec): min=153, max=331, avg=187.43, stdev=21.31 00:11:05.568 lat (usec): min=163, max=366, avg=200.39, stdev=24.73 00:11:05.568 clat percentiles (usec): 00:11:05.568 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:05.568 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:11:05.568 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 227], 00:11:05.568 | 99.00th=[ 241], 99.50th=[ 281], 99.90th=[ 330], 99.95th=[ 330], 00:11:05.568 | 99.99th=[ 330] 00:11:05.568 bw ( KiB/s): min= 4096, max= 4096, per=41.08%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.568 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.568 lat (usec) : 250=95.13%, 500=0.75% 00:11:05.568 lat (msec) : 50=4.12% 00:11:05.568 cpu : usr=0.20%, sys=1.09%, ctx=535, majf=0, minf=1 00:11:05.568 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.568 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.568 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.568 00:11:05.568 Run status group 0 (all jobs): 00:11:05.568 READ: bw=3221KiB/s (3298kB/s), 87.0KiB/s-2093KiB/s (89.1kB/s-2143kB/s), io=3308KiB 
(3387kB), run=1001-1027msec 00:11:05.568 WRITE: bw=9971KiB/s (10.2MB/s), 1994KiB/s-4051KiB/s (2042kB/s-4149kB/s), io=10.0MiB (10.5MB), run=1001-1027msec 00:11:05.568 00:11:05.568 Disk stats (read/write): 00:11:05.568 nvme0n1: ios=75/512, merge=0/0, ticks=1301/106, in_queue=1407, util=97.70% 00:11:05.568 nvme0n2: ios=52/512, merge=0/0, ticks=1693/105, in_queue=1798, util=97.86% 00:11:05.568 nvme0n3: ios=548/1024, merge=0/0, ticks=721/191, in_queue=912, util=90.46% 00:11:05.568 nvme0n4: ios=75/512, merge=0/0, ticks=1066/94, in_queue=1160, util=97.99% 00:11:05.568 11:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:05.568 [global] 00:11:05.568 thread=1 00:11:05.568 invalidate=1 00:11:05.568 rw=randwrite 00:11:05.568 time_based=1 00:11:05.568 runtime=1 00:11:05.568 ioengine=libaio 00:11:05.568 direct=1 00:11:05.568 bs=4096 00:11:05.568 iodepth=1 00:11:05.568 norandommap=0 00:11:05.568 numjobs=1 00:11:05.568 00:11:05.827 verify_dump=1 00:11:05.827 verify_backlog=512 00:11:05.827 verify_state_save=0 00:11:05.827 do_verify=1 00:11:05.827 verify=crc32c-intel 00:11:05.827 [job0] 00:11:05.827 filename=/dev/nvme0n1 00:11:05.827 [job1] 00:11:05.827 filename=/dev/nvme0n2 00:11:05.827 [job2] 00:11:05.827 filename=/dev/nvme0n3 00:11:05.827 [job3] 00:11:05.827 filename=/dev/nvme0n4 00:11:05.827 Could not set queue depth (nvme0n1) 00:11:05.827 Could not set queue depth (nvme0n2) 00:11:05.827 Could not set queue depth (nvme0n3) 00:11:05.827 Could not set queue depth (nvme0n4) 00:11:05.827 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.827 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.827 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:11:05.827 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.827 fio-3.35 00:11:05.827 Starting 4 threads 00:11:07.209 00:11:07.209 job0: (groupid=0, jobs=1): err= 0: pid=154659: Sun Nov 17 11:05:31 2024 00:11:07.209 read: IOPS=166, BW=666KiB/s (682kB/s)(668KiB/1003msec) 00:11:07.209 slat (nsec): min=5709, max=61871, avg=13249.57, stdev=6160.12 00:11:07.209 clat (usec): min=197, max=42192, avg=5396.80, stdev=13673.36 00:11:07.209 lat (usec): min=209, max=42204, avg=5410.05, stdev=13676.50 00:11:07.209 clat percentiles (usec): 00:11:07.209 | 1.00th=[ 204], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:11:07.209 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:11:07.209 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[41157], 95.00th=[41157], 00:11:07.209 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.209 | 99.99th=[42206] 00:11:07.209 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:11:07.209 slat (nsec): min=5946, max=39610, avg=14404.88, stdev=4162.25 00:11:07.209 clat (usec): min=144, max=361, avg=174.24, stdev=16.70 00:11:07.209 lat (usec): min=158, max=375, avg=188.65, stdev=16.83 00:11:07.209 clat percentiles (usec): 00:11:07.209 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:11:07.209 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:11:07.209 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:11:07.209 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 363], 99.95th=[ 363], 00:11:07.209 | 99.99th=[ 363] 00:11:07.209 bw ( KiB/s): min= 4096, max= 4096, per=24.24%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.209 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.209 lat (usec) : 250=95.29%, 500=1.62% 00:11:07.209 lat (msec) : 50=3.09% 00:11:07.209 cpu : usr=0.20%, sys=1.20%, ctx=679, majf=0, minf=1 00:11:07.209 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.209 issued rwts: total=167,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.209 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.209 job1: (groupid=0, jobs=1): err= 0: pid=154669: Sun Nov 17 11:05:31 2024 00:11:07.209 read: IOPS=1376, BW=5507KiB/s (5640kB/s)(5524KiB/1003msec) 00:11:07.210 slat (nsec): min=5909, max=36810, avg=7748.07, stdev=2754.46 00:11:07.210 clat (usec): min=203, max=42033, avg=494.64, stdev=3121.13 00:11:07.210 lat (usec): min=210, max=42043, avg=502.38, stdev=3121.27 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:11:07.210 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:11:07.210 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 302], 00:11:07.210 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:07.210 | 99.99th=[42206] 00:11:07.210 write: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec); 0 zone resets 00:11:07.210 slat (nsec): min=7351, max=76499, avg=9112.50, stdev=2285.69 00:11:07.210 clat (usec): min=131, max=544, avg=187.08, stdev=23.75 00:11:07.210 lat (usec): min=139, max=553, avg=196.19, stdev=24.09 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 172], 00:11:07.210 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:11:07.210 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 223], 00:11:07.210 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 433], 99.95th=[ 545], 00:11:07.210 | 99.99th=[ 545] 00:11:07.210 bw ( KiB/s): min= 3160, max= 9128, per=36.36%, avg=6144.00, stdev=4220.01, samples=2 00:11:07.210 iops : min= 790, max= 2282, avg=1536.00, stdev=1055.00, samples=2 00:11:07.210 lat 
(usec) : 250=76.28%, 500=23.00%, 750=0.45% 00:11:07.210 lat (msec) : 50=0.27% 00:11:07.210 cpu : usr=2.00%, sys=3.39%, ctx=2917, majf=0, minf=1 00:11:07.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 issued rwts: total=1381,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.210 job2: (groupid=0, jobs=1): err= 0: pid=154699: Sun Nov 17 11:05:31 2024 00:11:07.210 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:07.210 slat (nsec): min=5699, max=59048, avg=15368.16, stdev=8861.46 00:11:07.210 clat (usec): min=182, max=41818, avg=400.27, stdev=2558.99 00:11:07.210 lat (usec): min=188, max=41829, avg=415.64, stdev=2558.87 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:11:07.210 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:11:07.210 | 70.00th=[ 235], 80.00th=[ 262], 90.00th=[ 322], 95.00th=[ 375], 00:11:07.210 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[41681], 99.95th=[41681], 00:11:07.210 | 99.99th=[41681] 00:11:07.210 write: IOPS=1675, BW=6701KiB/s (6862kB/s)(6708KiB/1001msec); 0 zone resets 00:11:07.210 slat (nsec): min=6941, max=58881, avg=16734.38, stdev=5474.20 00:11:07.210 clat (usec): min=131, max=404, avg=190.15, stdev=32.85 00:11:07.210 lat (usec): min=140, max=422, avg=206.88, stdev=34.69 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 157], 20.00th=[ 165], 00:11:07.210 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 194], 00:11:07.210 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 231], 95.00th=[ 247], 00:11:07.210 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 396], 99.95th=[ 404], 00:11:07.210 | 99.99th=[ 404] 
00:11:07.210 bw ( KiB/s): min= 8256, max= 8256, per=48.86%, avg=8256.00, stdev= 0.00, samples=1 00:11:07.210 iops : min= 2064, max= 2064, avg=2064.00, stdev= 0.00, samples=1 00:11:07.210 lat (usec) : 250=87.05%, 500=12.76% 00:11:07.210 lat (msec) : 50=0.19% 00:11:07.210 cpu : usr=2.40%, sys=5.70%, ctx=3214, majf=0, minf=1 00:11:07.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 issued rwts: total=1536,1677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.210 job3: (groupid=0, jobs=1): err= 0: pid=154712: Sun Nov 17 11:05:31 2024 00:11:07.210 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:11:07.210 slat (nsec): min=14530, max=34394, avg=21515.24, stdev=8449.14 00:11:07.210 clat (usec): min=289, max=42026, avg=39360.86, stdev=8965.36 00:11:07.210 lat (usec): min=310, max=42058, avg=39382.38, stdev=8965.77 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:07.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:07.210 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:07.210 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.210 | 99.99th=[42206] 00:11:07.210 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:07.210 slat (nsec): min=8045, max=75359, avg=25410.99, stdev=9051.09 00:11:07.210 clat (usec): min=180, max=470, avg=306.86, stdev=82.96 00:11:07.210 lat (usec): min=204, max=500, avg=332.27, stdev=84.78 00:11:07.210 clat percentiles (usec): 00:11:07.210 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 223], 00:11:07.210 | 30.00th=[ 237], 40.00th=[ 258], 50.00th=[ 302], 60.00th=[ 326], 
00:11:07.210 | 70.00th=[ 363], 80.00th=[ 396], 90.00th=[ 433], 95.00th=[ 445], 00:11:07.210 | 99.00th=[ 461], 99.50th=[ 465], 99.90th=[ 469], 99.95th=[ 469], 00:11:07.210 | 99.99th=[ 469] 00:11:07.210 bw ( KiB/s): min= 4096, max= 4096, per=24.24%, avg=4096.00, stdev= 0.00, samples=1 00:11:07.210 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:07.210 lat (usec) : 250=35.27%, 500=60.98% 00:11:07.210 lat (msec) : 50=3.75% 00:11:07.210 cpu : usr=0.70%, sys=1.90%, ctx=533, majf=0, minf=1 00:11:07.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.210 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.210 00:11:07.210 Run status group 0 (all jobs): 00:11:07.210 READ: bw=12.1MiB/s (12.7MB/s), 83.9KiB/s-6138KiB/s (85.9kB/s-6285kB/s), io=12.1MiB (12.7MB), run=1001-1003msec 00:11:07.210 WRITE: bw=16.5MiB/s (17.3MB/s), 2042KiB/s-6701KiB/s (2091kB/s-6862kB/s), io=16.6MiB (17.4MB), run=1001-1003msec 00:11:07.210 00:11:07.210 Disk stats (read/write): 00:11:07.210 nvme0n1: ios=207/512, merge=0/0, ticks=751/91, in_queue=842, util=86.57% 00:11:07.210 nvme0n2: ios=1259/1536, merge=0/0, ticks=511/270, in_queue=781, util=87.60% 00:11:07.210 nvme0n3: ios=1097/1536, merge=0/0, ticks=1412/275, in_queue=1687, util=99.37% 00:11:07.210 nvme0n4: ios=64/512, merge=0/0, ticks=774/142, in_queue=916, util=95.57% 00:11:07.210 11:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:07.210 [global] 00:11:07.210 thread=1 00:11:07.210 invalidate=1 00:11:07.210 rw=write 00:11:07.210 time_based=1 00:11:07.210 runtime=1 00:11:07.210 ioengine=libaio 
00:11:07.210 direct=1 00:11:07.210 bs=4096 00:11:07.210 iodepth=128 00:11:07.210 norandommap=0 00:11:07.210 numjobs=1 00:11:07.210 00:11:07.210 verify_dump=1 00:11:07.210 verify_backlog=512 00:11:07.210 verify_state_save=0 00:11:07.210 do_verify=1 00:11:07.210 verify=crc32c-intel 00:11:07.210 [job0] 00:11:07.210 filename=/dev/nvme0n1 00:11:07.210 [job1] 00:11:07.210 filename=/dev/nvme0n2 00:11:07.210 [job2] 00:11:07.210 filename=/dev/nvme0n3 00:11:07.210 [job3] 00:11:07.210 filename=/dev/nvme0n4 00:11:07.210 Could not set queue depth (nvme0n1) 00:11:07.210 Could not set queue depth (nvme0n2) 00:11:07.210 Could not set queue depth (nvme0n3) 00:11:07.210 Could not set queue depth (nvme0n4) 00:11:07.470 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.470 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.470 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.471 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.471 fio-3.35 00:11:07.471 Starting 4 threads 00:11:08.855 00:11:08.855 job0: (groupid=0, jobs=1): err= 0: pid=155006: Sun Nov 17 11:05:33 2024 00:11:08.855 read: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1007msec) 00:11:08.855 slat (usec): min=2, max=26147, avg=149.75, stdev=1069.99 00:11:08.855 clat (usec): min=4361, max=75786, avg=22544.30, stdev=14609.61 00:11:08.855 lat (usec): min=4835, max=76507, avg=22694.05, stdev=14664.82 00:11:08.855 clat percentiles (usec): 00:11:08.855 | 1.00th=[ 7242], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11469], 00:11:08.855 | 30.00th=[12649], 40.00th=[14091], 50.00th=[15533], 60.00th=[20055], 00:11:08.855 | 70.00th=[23987], 80.00th=[34866], 90.00th=[46924], 95.00th=[50594], 00:11:08.855 | 99.00th=[69731], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:11:08.855 
| 99.99th=[76022] 00:11:08.855 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:11:08.855 slat (usec): min=4, max=18867, avg=147.14, stdev=933.00 00:11:08.855 clat (msec): min=3, max=132, avg=22.42, stdev=17.41 00:11:08.855 lat (msec): min=3, max=132, avg=22.56, stdev=17.48 00:11:08.855 clat percentiles (msec): 00:11:08.855 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 13], 00:11:08.855 | 30.00th=[ 14], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 22], 00:11:08.855 | 70.00th=[ 24], 80.00th=[ 28], 90.00th=[ 37], 95.00th=[ 43], 00:11:08.855 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 133], 99.95th=[ 133], 00:11:08.855 | 99.99th=[ 133] 00:11:08.855 bw ( KiB/s): min=11016, max=13002, per=20.09%, avg=12009.00, stdev=1404.31, samples=2 00:11:08.855 iops : min= 2754, max= 3250, avg=3002.00, stdev=350.72, samples=2 00:11:08.855 lat (msec) : 4=0.12%, 10=10.92%, 20=45.10%, 50=38.28%, 100=4.62% 00:11:08.855 lat (msec) : 250=0.95% 00:11:08.855 cpu : usr=3.48%, sys=5.27%, ctx=318, majf=0, minf=1 00:11:08.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:08.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.855 issued rwts: total=2615,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.855 job1: (groupid=0, jobs=1): err= 0: pid=155007: Sun Nov 17 11:05:33 2024 00:11:08.855 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:08.855 slat (usec): min=2, max=19707, avg=117.07, stdev=881.75 00:11:08.855 clat (usec): min=5338, max=58339, avg=15520.49, stdev=7822.99 00:11:08.855 lat (usec): min=5345, max=58350, avg=15637.57, stdev=7895.29 00:11:08.855 clat percentiles (usec): 00:11:08.855 | 1.00th=[ 6652], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10421], 00:11:08.855 | 30.00th=[11338], 40.00th=[12518], 50.00th=[12911], 
60.00th=[13829], 00:11:08.856 | 70.00th=[15926], 80.00th=[19006], 90.00th=[24773], 95.00th=[31851], 00:11:08.856 | 99.00th=[55313], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:11:08.856 | 99.99th=[58459] 00:11:08.856 write: IOPS=3916, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1003msec); 0 zone resets 00:11:08.856 slat (usec): min=3, max=21160, avg=131.50, stdev=980.63 00:11:08.856 clat (usec): min=1648, max=72462, avg=18221.50, stdev=9901.50 00:11:08.856 lat (usec): min=2977, max=72475, avg=18353.00, stdev=9981.85 00:11:08.856 clat percentiles (usec): 00:11:08.856 | 1.00th=[ 5145], 5.00th=[ 8029], 10.00th=[ 9634], 20.00th=[11338], 00:11:08.856 | 30.00th=[12518], 40.00th=[13435], 50.00th=[14746], 60.00th=[15008], 00:11:08.856 | 70.00th=[19792], 80.00th=[25035], 90.00th=[32113], 95.00th=[40633], 00:11:08.856 | 99.00th=[49546], 99.50th=[51119], 99.90th=[60031], 99.95th=[63701], 00:11:08.856 | 99.99th=[72877] 00:11:08.856 bw ( KiB/s): min=12536, max=17864, per=25.43%, avg=15200.00, stdev=3767.46, samples=2 00:11:08.856 iops : min= 3134, max= 4466, avg=3800.00, stdev=941.87, samples=2 00:11:08.856 lat (msec) : 2=0.01%, 4=0.28%, 10=12.37%, 20=63.86%, 50=22.43% 00:11:08.856 lat (msec) : 100=1.05% 00:11:08.856 cpu : usr=2.99%, sys=6.19%, ctx=244, majf=0, minf=1 00:11:08.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.856 issued rwts: total=3584,3928,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.856 job2: (groupid=0, jobs=1): err= 0: pid=155008: Sun Nov 17 11:05:33 2024 00:11:08.856 read: IOPS=3598, BW=14.1MiB/s (14.7MB/s)(14.2MiB/1008msec) 00:11:08.856 slat (usec): min=2, max=19311, avg=120.57, stdev=859.49 00:11:08.856 clat (usec): min=4470, max=73178, avg=16487.16, stdev=7218.95 00:11:08.856 lat 
(usec): min=4479, max=75275, avg=16607.74, stdev=7291.43 00:11:08.856 clat percentiles (usec): 00:11:08.856 | 1.00th=[ 4555], 5.00th=[10945], 10.00th=[11994], 20.00th=[12518], 00:11:08.856 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14484], 60.00th=[15795], 00:11:08.856 | 70.00th=[17695], 80.00th=[19268], 90.00th=[21103], 95.00th=[29230], 00:11:08.856 | 99.00th=[52167], 99.50th=[55313], 99.90th=[68682], 99.95th=[68682], 00:11:08.856 | 99.99th=[72877] 00:11:08.856 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:11:08.856 slat (usec): min=3, max=11801, avg=122.07, stdev=767.96 00:11:08.856 clat (usec): min=2670, max=59194, avg=16621.63, stdev=9076.41 00:11:08.856 lat (usec): min=2677, max=59201, avg=16743.70, stdev=9132.54 00:11:08.856 clat percentiles (usec): 00:11:08.856 | 1.00th=[ 4424], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[11207], 00:11:08.856 | 30.00th=[11994], 40.00th=[12256], 50.00th=[13435], 60.00th=[14222], 00:11:08.856 | 70.00th=[16712], 80.00th=[20055], 90.00th=[31065], 95.00th=[37487], 00:11:08.856 | 99.00th=[51643], 99.50th=[54789], 99.90th=[54789], 99.95th=[55313], 00:11:08.856 | 99.99th=[58983] 00:11:08.856 bw ( KiB/s): min=14504, max=17584, per=26.84%, avg=16044.00, stdev=2177.89, samples=2 00:11:08.856 iops : min= 3626, max= 4396, avg=4011.00, stdev=544.47, samples=2 00:11:08.856 lat (msec) : 4=0.43%, 10=7.16%, 20=75.84%, 50=15.38%, 100=1.19% 00:11:08.856 cpu : usr=4.67%, sys=6.85%, ctx=241, majf=0, minf=1 00:11:08.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.856 issued rwts: total=3627,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.856 job3: (groupid=0, jobs=1): err= 0: pid=155009: Sun Nov 17 11:05:33 2024 00:11:08.856 read: 
IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:11:08.856 slat (usec): min=2, max=29755, avg=137.12, stdev=989.69 00:11:08.856 clat (usec): min=8090, max=82750, avg=17634.35, stdev=10692.86 00:11:08.856 lat (usec): min=8097, max=82766, avg=17771.47, stdev=10784.07 00:11:08.856 clat percentiles (usec): 00:11:08.856 | 1.00th=[ 9503], 5.00th=[11469], 10.00th=[12387], 20.00th=[12911], 00:11:08.856 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[14877], 00:11:08.856 | 70.00th=[15664], 80.00th=[20317], 90.00th=[22152], 95.00th=[45876], 00:11:08.856 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[81265], 00:11:08.856 | 99.99th=[82314] 00:11:08.856 write: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1007msec); 0 zone resets 00:11:08.856 slat (usec): min=3, max=15317, avg=120.62, stdev=677.83 00:11:08.856 clat (usec): min=5112, max=60919, avg=16113.83, stdev=7840.24 00:11:08.856 lat (usec): min=5949, max=60938, avg=16234.45, stdev=7893.62 00:11:08.856 clat percentiles (usec): 00:11:08.856 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11600], 00:11:08.856 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13566], 60.00th=[14353], 00:11:08.856 | 70.00th=[15401], 80.00th=[16319], 90.00th=[30278], 95.00th=[36963], 00:11:08.856 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[52167], 00:11:08.856 | 99.99th=[61080] 00:11:08.856 bw ( KiB/s): min=10608, max=20104, per=25.69%, avg=15356.00, stdev=6714.69, samples=2 00:11:08.856 iops : min= 2652, max= 5026, avg=3839.00, stdev=1678.67, samples=2 00:11:08.856 lat (msec) : 10=2.41%, 20=80.49%, 50=15.10%, 100=2.00% 00:11:08.856 cpu : usr=3.88%, sys=5.86%, ctx=340, majf=0, minf=2 00:11:08.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:08.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.856 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.856 issued rwts: total=3584,3967,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:08.856 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.856 00:11:08.856 Run status group 0 (all jobs): 00:11:08.856 READ: bw=52.0MiB/s (54.5MB/s), 10.1MiB/s-14.1MiB/s (10.6MB/s-14.7MB/s), io=52.4MiB (54.9MB), run=1003-1008msec 00:11:08.856 WRITE: bw=58.4MiB/s (61.2MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.6MB/s), io=58.8MiB (61.7MB), run=1003-1008msec 00:11:08.856 00:11:08.856 Disk stats (read/write): 00:11:08.856 nvme0n1: ios=2369/2560, merge=0/0, ticks=25537/35046, in_queue=60583, util=96.89% 00:11:08.856 nvme0n2: ios=2958/3072, merge=0/0, ticks=35794/45761, in_queue=81555, util=98.48% 00:11:08.856 nvme0n3: ios=3106/3564, merge=0/0, ticks=40980/42176, in_queue=83156, util=98.34% 00:11:08.856 nvme0n4: ios=3303/3584, merge=0/0, ticks=17721/13950, in_queue=31671, util=89.62% 00:11:08.856 11:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:08.856 [global] 00:11:08.856 thread=1 00:11:08.856 invalidate=1 00:11:08.856 rw=randwrite 00:11:08.856 time_based=1 00:11:08.856 runtime=1 00:11:08.856 ioengine=libaio 00:11:08.856 direct=1 00:11:08.856 bs=4096 00:11:08.856 iodepth=128 00:11:08.856 norandommap=0 00:11:08.856 numjobs=1 00:11:08.856 00:11:08.856 verify_dump=1 00:11:08.856 verify_backlog=512 00:11:08.856 verify_state_save=0 00:11:08.856 do_verify=1 00:11:08.856 verify=crc32c-intel 00:11:08.856 [job0] 00:11:08.856 filename=/dev/nvme0n1 00:11:08.856 [job1] 00:11:08.856 filename=/dev/nvme0n2 00:11:08.856 [job2] 00:11:08.856 filename=/dev/nvme0n3 00:11:08.856 [job3] 00:11:08.856 filename=/dev/nvme0n4 00:11:08.856 Could not set queue depth (nvme0n1) 00:11:08.856 Could not set queue depth (nvme0n2) 00:11:08.856 Could not set queue depth (nvme0n3) 00:11:08.856 Could not set queue depth (nvme0n4) 00:11:08.856 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.856 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.856 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.856 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.856 fio-3.35 00:11:08.856 Starting 4 threads 00:11:10.239 00:11:10.239 job0: (groupid=0, jobs=1): err= 0: pid=155233: Sun Nov 17 11:05:34 2024 00:11:10.239 read: IOPS=2167, BW=8670KiB/s (8878kB/s)(8696KiB/1003msec) 00:11:10.239 slat (usec): min=3, max=11051, avg=212.72, stdev=1011.59 00:11:10.239 clat (usec): min=1205, max=57424, avg=26463.85, stdev=12594.45 00:11:10.239 lat (usec): min=5767, max=57429, avg=26676.57, stdev=12655.25 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 6128], 5.00th=[10945], 10.00th=[11338], 20.00th=[12125], 00:11:10.239 | 30.00th=[21890], 40.00th=[22938], 50.00th=[24773], 60.00th=[25560], 00:11:10.239 | 70.00th=[29754], 80.00th=[36963], 90.00th=[46400], 95.00th=[51119], 00:11:10.239 | 99.00th=[55837], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:11:10.239 | 99.99th=[57410] 00:11:10.239 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:10.239 slat (usec): min=3, max=10873, avg=200.65, stdev=920.49 00:11:10.239 clat (usec): min=8193, max=61290, avg=27023.95, stdev=12869.70 00:11:10.239 lat (usec): min=8516, max=61308, avg=27224.60, stdev=12950.60 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 8979], 5.00th=[10945], 10.00th=[11338], 20.00th=[19006], 00:11:10.239 | 30.00th=[19792], 40.00th=[21365], 50.00th=[24249], 60.00th=[25035], 00:11:10.239 | 70.00th=[30278], 80.00th=[35914], 90.00th=[49546], 95.00th=[54789], 00:11:10.239 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:11:10.239 | 99.99th=[61080] 00:11:10.239 bw ( KiB/s): min= 8192, 
max=12280, per=16.26%, avg=10236.00, stdev=2890.65, samples=2 00:11:10.239 iops : min= 2048, max= 3070, avg=2559.00, stdev=722.66, samples=2 00:11:10.239 lat (msec) : 2=0.02%, 10=2.70%, 20=26.45%, 50=62.17%, 100=8.66% 00:11:10.239 cpu : usr=3.39%, sys=4.59%, ctx=295, majf=0, minf=1 00:11:10.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:10.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.239 issued rwts: total=2174,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.239 job1: (groupid=0, jobs=1): err= 0: pid=155234: Sun Nov 17 11:05:34 2024 00:11:10.239 read: IOPS=4680, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1008msec) 00:11:10.239 slat (usec): min=2, max=16270, avg=117.96, stdev=789.31 00:11:10.239 clat (usec): min=3743, max=52100, avg=14531.16, stdev=9096.34 00:11:10.239 lat (usec): min=3756, max=52118, avg=14649.11, stdev=9170.68 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 4424], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[10028], 00:11:10.239 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10814], 60.00th=[11863], 00:11:10.239 | 70.00th=[13960], 80.00th=[16909], 90.00th=[22938], 95.00th=[40633], 00:11:10.239 | 99.00th=[47973], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:11:10.239 | 99.99th=[52167] 00:11:10.239 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:11:10.239 slat (usec): min=2, max=15151, avg=73.75, stdev=502.98 00:11:10.239 clat (usec): min=187, max=47811, avg=11584.34, stdev=5862.54 00:11:10.239 lat (usec): min=384, max=47827, avg=11658.09, stdev=5900.09 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 2704], 5.00th=[ 4490], 10.00th=[ 6063], 20.00th=[ 9110], 00:11:10.239 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11207], 60.00th=[11469], 00:11:10.239 | 
70.00th=[11600], 80.00th=[11863], 90.00th=[14615], 95.00th=[23200], 00:11:10.239 | 99.00th=[40109], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:11:10.239 | 99.99th=[47973] 00:11:10.239 bw ( KiB/s): min=18760, max=22064, per=32.42%, avg=20412.00, stdev=2336.28, samples=2 00:11:10.239 iops : min= 4690, max= 5516, avg=5103.00, stdev=584.07, samples=2 00:11:10.239 lat (usec) : 250=0.01%, 500=0.06%, 1000=0.09% 00:11:10.239 lat (msec) : 2=0.16%, 4=1.78%, 10=26.24%, 20=61.77%, 50=9.46% 00:11:10.239 lat (msec) : 100=0.43% 00:11:10.239 cpu : usr=3.28%, sys=7.05%, ctx=532, majf=0, minf=1 00:11:10.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:10.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.239 issued rwts: total=4718,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.239 job2: (groupid=0, jobs=1): err= 0: pid=155235: Sun Nov 17 11:05:34 2024 00:11:10.239 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:10.239 slat (usec): min=2, max=9582, avg=100.83, stdev=584.22 00:11:10.239 clat (usec): min=6968, max=31667, avg=13269.27, stdev=3447.37 00:11:10.239 lat (usec): min=6977, max=31677, avg=13370.10, stdev=3488.14 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[10814], 00:11:10.239 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:11:10.239 | 70.00th=[13566], 80.00th=[14746], 90.00th=[17171], 95.00th=[20841], 00:11:10.239 | 99.00th=[26084], 99.50th=[27919], 99.90th=[31589], 99.95th=[31589], 00:11:10.239 | 99.99th=[31589] 00:11:10.239 write: IOPS=5106, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:10.239 slat (usec): min=3, max=9398, avg=95.21, stdev=537.94 00:11:10.239 clat (usec): min=530, max=29840, avg=12813.56, 
stdev=2911.39 00:11:10.239 lat (usec): min=545, max=29865, avg=12908.77, stdev=2945.18 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 6194], 5.00th=[ 8291], 10.00th=[ 9896], 20.00th=[11207], 00:11:10.239 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:11:10.239 | 70.00th=[13173], 80.00th=[13566], 90.00th=[15795], 95.00th=[18220], 00:11:10.239 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[25035], 00:11:10.239 | 99.99th=[29754] 00:11:10.239 bw ( KiB/s): min=18104, max=18104, per=28.76%, avg=18104.00, stdev= 0.00, samples=1 00:11:10.239 iops : min= 4526, max= 4526, avg=4526.00, stdev= 0.00, samples=1 00:11:10.239 lat (usec) : 750=0.02% 00:11:10.239 lat (msec) : 10=8.68%, 20=87.07%, 50=4.23% 00:11:10.239 cpu : usr=5.70%, sys=9.90%, ctx=516, majf=0, minf=1 00:11:10.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:10.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.239 issued rwts: total=4608,5112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.239 job3: (groupid=0, jobs=1): err= 0: pid=155236: Sun Nov 17 11:05:34 2024 00:11:10.239 read: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1003msec) 00:11:10.239 slat (usec): min=2, max=37302, avg=183.69, stdev=1176.27 00:11:10.239 clat (usec): min=881, max=54829, avg=22924.28, stdev=7835.42 00:11:10.239 lat (usec): min=6078, max=54841, avg=23107.96, stdev=7902.80 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 6325], 5.00th=[ 9110], 10.00th=[12649], 20.00th=[16909], 00:11:10.239 | 30.00th=[19006], 40.00th=[21365], 50.00th=[23725], 60.00th=[24773], 00:11:10.239 | 70.00th=[26870], 80.00th=[28967], 90.00th=[32637], 95.00th=[37487], 00:11:10.239 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[43779], 00:11:10.239 | 99.99th=[54789] 
00:11:10.239 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:10.239 slat (usec): min=3, max=15804, avg=131.42, stdev=768.53 00:11:10.239 clat (usec): min=3543, max=52195, avg=18624.85, stdev=8064.68 00:11:10.239 lat (usec): min=3551, max=52244, avg=18756.27, stdev=8103.83 00:11:10.239 clat percentiles (usec): 00:11:10.239 | 1.00th=[ 6849], 5.00th=[ 8455], 10.00th=[11207], 20.00th=[11731], 00:11:10.239 | 30.00th=[12518], 40.00th=[16712], 50.00th=[17433], 60.00th=[18744], 00:11:10.239 | 70.00th=[21365], 80.00th=[24511], 90.00th=[26870], 95.00th=[32637], 00:11:10.239 | 99.00th=[47449], 99.50th=[51119], 99.90th=[52167], 99.95th=[52167], 00:11:10.239 | 99.99th=[52167] 00:11:10.239 bw ( KiB/s): min= 8872, max=15704, per=19.52%, avg=12288.00, stdev=4830.95, samples=2 00:11:10.239 iops : min= 2218, max= 3926, avg=3072.00, stdev=1207.74, samples=2 00:11:10.239 lat (usec) : 1000=0.02% 00:11:10.240 lat (msec) : 4=0.16%, 10=6.35%, 20=44.96%, 50=48.03%, 100=0.48% 00:11:10.240 cpu : usr=3.19%, sys=6.19%, ctx=267, majf=0, minf=1 00:11:10.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:10.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.240 issued rwts: total=3009,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.240 00:11:10.240 Run status group 0 (all jobs): 00:11:10.240 READ: bw=56.2MiB/s (59.0MB/s), 8670KiB/s-18.3MiB/s (8878kB/s-19.2MB/s), io=56.7MiB (59.4MB), run=1001-1008msec 00:11:10.240 WRITE: bw=61.5MiB/s (64.5MB/s), 9.97MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1001-1008msec 00:11:10.240 00:11:10.240 Disk stats (read/write): 00:11:10.240 nvme0n1: ios=2012/2048, merge=0/0, ticks=15322/13203, in_queue=28525, util=96.49% 00:11:10.240 nvme0n2: ios=3999/4096, merge=0/0, ticks=46017/40447, 
in_queue=86464, util=86.90% 00:11:10.240 nvme0n3: ios=4092/4103, merge=0/0, ticks=25565/24342, in_queue=49907, util=88.95% 00:11:10.240 nvme0n4: ios=2560/2632, merge=0/0, ticks=28952/20802, in_queue=49754, util=89.60% 00:11:10.240 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:10.240 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=155374 00:11:10.240 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.240 11:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:10.240 [global] 00:11:10.240 thread=1 00:11:10.240 invalidate=1 00:11:10.240 rw=read 00:11:10.240 time_based=1 00:11:10.240 runtime=10 00:11:10.240 ioengine=libaio 00:11:10.240 direct=1 00:11:10.240 bs=4096 00:11:10.240 iodepth=1 00:11:10.240 norandommap=1 00:11:10.240 numjobs=1 00:11:10.240 00:11:10.240 [job0] 00:11:10.240 filename=/dev/nvme0n1 00:11:10.240 [job1] 00:11:10.240 filename=/dev/nvme0n2 00:11:10.240 [job2] 00:11:10.240 filename=/dev/nvme0n3 00:11:10.240 [job3] 00:11:10.240 filename=/dev/nvme0n4 00:11:10.240 Could not set queue depth (nvme0n1) 00:11:10.240 Could not set queue depth (nvme0n2) 00:11:10.240 Could not set queue depth (nvme0n3) 00:11:10.240 Could not set queue depth (nvme0n4) 00:11:10.240 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.240 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.240 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.240 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.240 fio-3.35 00:11:10.240 Starting 4 threads 00:11:13.537 11:05:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.537 11:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.537 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=7106560, buflen=4096 00:11:13.537 fio: pid=155468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.537 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.537 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:13.796 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=28577792, buflen=4096 00:11:13.796 fio: pid=155466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.054 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.054 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.054 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29626368, buflen=4096 00:11:14.054 fio: pid=155462, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.313 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41750528, buflen=4096 00:11:14.313 fio: pid=155465, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.313 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.313 11:05:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.313 00:11:14.313 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155462: Sun Nov 17 11:05:38 2024 00:11:14.313 read: IOPS=2072, BW=8290KiB/s (8489kB/s)(28.3MiB/3490msec) 00:11:14.313 slat (usec): min=5, max=13945, avg=15.76, stdev=201.11 00:11:14.313 clat (usec): min=187, max=46075, avg=460.18, stdev=2511.23 00:11:14.313 lat (usec): min=193, max=46089, avg=475.94, stdev=2519.38 00:11:14.313 clat percentiles (usec): 00:11:14.313 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 235], 20.00th=[ 249], 00:11:14.313 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 289], 00:11:14.313 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 437], 95.00th=[ 482], 00:11:14.313 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[41681], 99.95th=[42206], 00:11:14.313 | 99.99th=[45876] 00:11:14.313 bw ( KiB/s): min= 160, max=12176, per=32.47%, avg=9005.33, stdev=4512.07, samples=6 00:11:14.313 iops : min= 40, max= 3044, avg=2251.33, stdev=1128.02, samples=6 00:11:14.313 lat (usec) : 250=20.31%, 500=76.14%, 750=3.11%, 1000=0.03% 00:11:14.313 lat (msec) : 10=0.03%, 50=0.37% 00:11:14.313 cpu : usr=1.78%, sys=3.96%, ctx=7238, majf=0, minf=1 00:11:14.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 issued rwts: total=7234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.313 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155465: Sun Nov 17 11:05:38 2024 00:11:14.313 read: IOPS=2703, 
BW=10.6MiB/s (11.1MB/s)(39.8MiB/3770msec) 00:11:14.313 slat (usec): min=4, max=16698, avg=19.54, stdev=255.48 00:11:14.313 clat (usec): min=159, max=41992, avg=344.67, stdev=1808.27 00:11:14.313 lat (usec): min=163, max=42014, avg=363.65, stdev=1825.73 00:11:14.313 clat percentiles (usec): 00:11:14.313 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 200], 00:11:14.313 | 30.00th=[ 219], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 260], 00:11:14.313 | 70.00th=[ 281], 80.00th=[ 314], 90.00th=[ 379], 95.00th=[ 441], 00:11:14.313 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:11:14.313 | 99.99th=[42206] 00:11:14.313 bw ( KiB/s): min= 176, max=14472, per=40.69%, avg=11284.71, stdev=5002.06, samples=7 00:11:14.313 iops : min= 44, max= 3618, avg=2821.14, stdev=1250.50, samples=7 00:11:14.313 lat (usec) : 250=53.43%, 500=45.37%, 750=0.98% 00:11:14.313 lat (msec) : 10=0.01%, 50=0.20% 00:11:14.313 cpu : usr=1.91%, sys=4.56%, ctx=10202, majf=0, minf=1 00:11:14.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 issued rwts: total=10194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.313 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155466: Sun Nov 17 11:05:38 2024 00:11:14.313 read: IOPS=2166, BW=8664KiB/s (8872kB/s)(27.3MiB/3221msec) 00:11:14.313 slat (nsec): min=5416, max=55997, avg=13250.14, stdev=4985.55 00:11:14.313 clat (usec): min=192, max=41194, avg=441.24, stdev=2571.83 00:11:14.313 lat (usec): min=200, max=41201, avg=454.49, stdev=2571.99 00:11:14.313 clat percentiles (usec): 00:11:14.313 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 253], 00:11:14.313 | 30.00th=[ 265], 40.00th=[ 273], 
50.00th=[ 277], 60.00th=[ 285], 00:11:14.313 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:11:14.313 | 99.00th=[ 388], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:11:14.313 | 99.99th=[41157] 00:11:14.313 bw ( KiB/s): min= 104, max=14376, per=33.51%, avg=9294.67, stdev=5501.39, samples=6 00:11:14.313 iops : min= 26, max= 3594, avg=2323.67, stdev=1375.35, samples=6 00:11:14.313 lat (usec) : 250=16.34%, 500=83.10%, 750=0.10%, 1000=0.04% 00:11:14.313 lat (msec) : 50=0.40% 00:11:14.313 cpu : usr=1.89%, sys=4.50%, ctx=6980, majf=0, minf=2 00:11:14.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 issued rwts: total=6978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.313 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155468: Sun Nov 17 11:05:38 2024 00:11:14.313 read: IOPS=594, BW=2375KiB/s (2432kB/s)(6940KiB/2922msec) 00:11:14.313 slat (nsec): min=5739, max=58872, avg=9491.68, stdev=5409.17 00:11:14.313 clat (usec): min=198, max=42019, avg=1659.10, stdev=7437.64 00:11:14.313 lat (usec): min=205, max=42037, avg=1668.58, stdev=7440.55 00:11:14.313 clat percentiles (usec): 00:11:14.313 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:11:14.313 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:11:14.313 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 379], 00:11:14.313 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:11:14.313 | 99.99th=[42206] 00:11:14.313 bw ( KiB/s): min= 96, max=13296, per=9.94%, avg=2756.80, stdev=5891.70, samples=5 00:11:14.313 iops : min= 24, max= 3324, avg=689.20, stdev=1472.93, samples=5 00:11:14.313 lat (usec) 
: 250=64.69%, 500=31.28%, 750=0.46% 00:11:14.313 lat (msec) : 2=0.06%, 50=3.46% 00:11:14.313 cpu : usr=0.24%, sys=0.96%, ctx=1737, majf=0, minf=2 00:11:14.313 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.313 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.313 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.313 00:11:14.313 Run status group 0 (all jobs): 00:11:14.313 READ: bw=27.1MiB/s (28.4MB/s), 2375KiB/s-10.6MiB/s (2432kB/s-11.1MB/s), io=102MiB (107MB), run=2922-3770msec 00:11:14.313 00:11:14.313 Disk stats (read/write): 00:11:14.313 nvme0n1: ios=7134/0, merge=0/0, ticks=4270/0, in_queue=4270, util=98.86% 00:11:14.313 nvme0n2: ios=10232/0, merge=0/0, ticks=3701/0, in_queue=3701, util=98.28% 00:11:14.313 nvme0n3: ios=6974/0, merge=0/0, ticks=2884/0, in_queue=2884, util=96.75% 00:11:14.313 nvme0n4: ios=1733/0, merge=0/0, ticks=2784/0, in_queue=2784, util=96.74% 00:11:14.572 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.572 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:14.829 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.830 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:15.088 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.088 11:05:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.346 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.346 11:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.604 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.604 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 155374 00:11:15.604 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.604 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:15.863 nvmf hotplug test: fio failed as expected 00:11:15.863 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.122 rmmod nvme_tcp 00:11:16.122 rmmod nvme_fabrics 00:11:16.122 rmmod nvme_keyring 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 153333 ']' 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 153333 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 153333 ']' 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 153333 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153333 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153333' 00:11:16.122 killing process with pid 153333 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 153333 00:11:16.122 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 153333 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:16.383 11:05:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.383 11:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.296 00:11:18.296 real 0m24.319s 00:11:18.296 user 1m25.613s 00:11:18.296 sys 0m7.127s 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.296 ************************************ 00:11:18.296 END TEST nvmf_fio_target 00:11:18.296 ************************************ 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.296 11:05:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:11:18.557 ************************************ 00:11:18.557 START TEST nvmf_bdevio 00:11:18.557 ************************************ 00:11:18.557 11:05:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.557 * Looking for test storage... 00:11:18.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.557 11:05:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.557 --rc genhtml_branch_coverage=1 00:11:18.557 --rc genhtml_function_coverage=1 00:11:18.557 --rc genhtml_legend=1 00:11:18.557 --rc geninfo_all_blocks=1 00:11:18.557 --rc geninfo_unexecuted_blocks=1 00:11:18.557 00:11:18.557 ' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.557 --rc genhtml_branch_coverage=1 00:11:18.557 --rc genhtml_function_coverage=1 00:11:18.557 --rc genhtml_legend=1 00:11:18.557 --rc geninfo_all_blocks=1 00:11:18.557 --rc geninfo_unexecuted_blocks=1 00:11:18.557 00:11:18.557 ' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.557 --rc genhtml_branch_coverage=1 00:11:18.557 --rc genhtml_function_coverage=1 00:11:18.557 --rc genhtml_legend=1 00:11:18.557 --rc geninfo_all_blocks=1 00:11:18.557 --rc geninfo_unexecuted_blocks=1 00:11:18.557 00:11:18.557 ' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.557 --rc genhtml_branch_coverage=1 00:11:18.557 --rc genhtml_function_coverage=1 00:11:18.557 --rc genhtml_legend=1 00:11:18.557 --rc geninfo_all_blocks=1 00:11:18.557 --rc geninfo_unexecuted_blocks=1 00:11:18.557 00:11:18.557 ' 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.557 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.558 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.558 11:05:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.099 11:05:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.099 11:05:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.099 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.099 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.099 
11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.099 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.099 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.099 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:11:21.100 00:11:21.100 --- 10.0.0.2 ping statistics --- 00:11:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.100 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:21.100 00:11:21.100 --- 10.0.0.1 ping statistics --- 00:11:21.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.100 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.100 11:05:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=158225 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 158225 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 158225 ']' 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.100 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.100 [2024-11-17 11:05:45.542680] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:11:21.100 [2024-11-17 11:05:45.542757] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.100 [2024-11-17 11:05:45.613072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.100 [2024-11-17 11:05:45.656915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.100 [2024-11-17 11:05:45.656976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.100 [2024-11-17 11:05:45.657012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.100 [2024-11-17 11:05:45.657024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.100 [2024-11-17 11:05:45.657034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:21.100 [2024-11-17 11:05:45.658631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.100 [2024-11-17 11:05:45.658695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:21.100 [2024-11-17 11:05:45.658760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:21.100 [2024-11-17 11:05:45.658763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 [2024-11-17 11:05:45.802068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.380 11:05:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 Malloc0 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.380 [2024-11-17 11:05:45.867728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:21.380 { 00:11:21.380 "params": { 00:11:21.380 "name": "Nvme$subsystem", 00:11:21.380 "trtype": "$TEST_TRANSPORT", 00:11:21.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.380 "adrfam": "ipv4", 00:11:21.380 "trsvcid": "$NVMF_PORT", 00:11:21.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.380 "hdgst": ${hdgst:-false}, 00:11:21.380 "ddgst": ${ddgst:-false} 00:11:21.380 }, 00:11:21.380 "method": "bdev_nvme_attach_controller" 00:11:21.380 } 00:11:21.380 EOF 00:11:21.380 )") 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:21.380 11:05:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:21.381 "params": { 00:11:21.381 "name": "Nvme1", 00:11:21.381 "trtype": "tcp", 00:11:21.381 "traddr": "10.0.0.2", 00:11:21.381 "adrfam": "ipv4", 00:11:21.381 "trsvcid": "4420", 00:11:21.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.381 "hdgst": false, 00:11:21.381 "ddgst": false 00:11:21.381 }, 00:11:21.381 "method": "bdev_nvme_attach_controller" 00:11:21.381 }' 00:11:21.381 [2024-11-17 11:05:45.917675] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:11:21.381 [2024-11-17 11:05:45.917740] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158249 ] 00:11:21.381 [2024-11-17 11:05:45.986614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.641 [2024-11-17 11:05:46.038883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.641 [2024-11-17 11:05:46.038934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.641 [2024-11-17 11:05:46.038937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.641 I/O targets: 00:11:21.641 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:21.641 00:11:21.641 00:11:21.641 CUnit - A unit testing framework for C - Version 2.1-3 00:11:21.641 http://cunit.sourceforge.net/ 00:11:21.641 00:11:21.641 00:11:21.641 Suite: bdevio tests on: Nvme1n1 00:11:21.641 Test: blockdev write read block ...passed 00:11:21.901 Test: blockdev write zeroes read block ...passed 00:11:21.901 Test: blockdev write zeroes read no split ...passed 00:11:21.901 Test: blockdev write zeroes read split 
...passed 00:11:21.901 Test: blockdev write zeroes read split partial ...passed 00:11:21.901 Test: blockdev reset ...[2024-11-17 11:05:46.334731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:21.901 [2024-11-17 11:05:46.334840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ffb70 (9): Bad file descriptor 00:11:21.901 [2024-11-17 11:05:46.439445] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:21.901 passed 00:11:21.901 Test: blockdev write read 8 blocks ...passed 00:11:21.901 Test: blockdev write read size > 128k ...passed 00:11:21.901 Test: blockdev write read invalid size ...passed 00:11:21.901 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:21.901 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:21.901 Test: blockdev write read max offset ...passed 00:11:22.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.162 Test: blockdev writev readv 8 blocks ...passed 00:11:22.162 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.162 Test: blockdev writev readv block ...passed 00:11:22.162 Test: blockdev writev readv size > 128k ...passed 00:11:22.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.162 Test: blockdev comparev and writev ...[2024-11-17 11:05:46.611718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.611754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.611779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 
11:05:46.611808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.612177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.612217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.612592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.612630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.612974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.162 [2024-11-17 11:05:46.612995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.162 [2024-11-17 11:05:46.613011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.162 passed 00:11:22.162 Test: blockdev nvme passthru rw ...passed 00:11:22.162 Test: blockdev nvme passthru vendor specific ...[2024-11-17 11:05:46.695892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.162 [2024-11-17 11:05:46.695920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.163 [2024-11-17 11:05:46.696065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.163 [2024-11-17 11:05:46.696089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.163 [2024-11-17 11:05:46.696228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.163 [2024-11-17 11:05:46.696252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.163 [2024-11-17 11:05:46.696393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.163 [2024-11-17 11:05:46.696416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.163 passed 00:11:22.163 Test: blockdev nvme admin passthru ...passed 00:11:22.163 Test: blockdev copy ...passed 00:11:22.163 00:11:22.163 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.163 suites 1 1 n/a 0 0 00:11:22.163 tests 23 23 23 0 0 00:11:22.163 asserts 152 152 152 0 n/a 00:11:22.163 00:11:22.163 Elapsed time = 1.078 seconds 
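The bdevio suite above was driven by the JSON config that `gen_nvmf_target_json` printed at the start of this run. The following is a reconstruction of that helper from the `nvmf/common.sh` trace; the transport, address, and port values here are placeholders standing in for the CI host's environment, not values taken from this run.

```shell
#!/usr/bin/env bash
# Reconstruction of gen_nvmf_target_json (nvmf/common.sh) as traced above.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are placeholder values.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    # One bdev_nvme_attach_controller entry per requested subsystem
    # (defaults to subsystem 1, as in the trace above).
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem entries with commas, as the IFS=, printf
    # step in the trace does before piping through jq.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

Run with a single subsystem this emits the same attach-controller object seen at the top of the trace (the real run resolved `$TEST_TRANSPORT` to `tcp` and `$NVMF_FIRST_TARGET_IP` to `10.0.0.2`).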
00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:22.423 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:22.424 rmmod nvme_tcp 00:11:22.424 rmmod nvme_fabrics 00:11:22.424 rmmod nvme_keyring 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 158225 ']' 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 158225 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 158225 ']' 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 158225 00:11:22.424 11:05:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158225 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158225' 00:11:22.424 killing process with pid 158225 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 158225 00:11:22.424 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 158225 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
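The teardown above runs `killprocess 158225` from `autotest_common.sh`: it probes the pid with `kill -0`, looks up the command name with `ps --no-headers -o comm=`, refuses to signal a `sudo` wrapper, then kills and reaps the process. A simplified sketch of that pattern (the real helper also branches on `uname`, omitted here), demonstrated against a child process we spawn ourselves rather than the nvmf target pid:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess helper traced above.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                 # is the process alive?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1     # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; ignore signal status
    return 0
}

sleep 30 &
killprocess $!
```

`kill -0` sends no signal; it only checks that the pid exists and is signalable, which is why the trace uses it as a liveness test before the real `kill`.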
00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.683 11:05:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.241 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.241 00:11:25.241 real 0m6.347s 00:11:25.241 user 0m9.278s 00:11:25.241 sys 0m2.149s 00:11:25.241 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.241 11:05:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.241 ************************************ 00:11:25.241 END TEST nvmf_bdevio 00:11:25.241 ************************************ 00:11:25.241 11:05:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:25.241 00:11:25.242 real 3m54.875s 00:11:25.242 user 10m10.738s 00:11:25.242 sys 1m7.303s 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.242 ************************************ 00:11:25.242 END TEST nvmf_target_core 00:11:25.242 ************************************ 00:11:25.242 11:05:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.242 11:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.242 11:05:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.242 11:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:11:25.242 ************************************ 00:11:25.242 START TEST nvmf_target_extra 00:11:25.242 ************************************ 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.242 * Looking for test storage... 00:11:25.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.242 --rc genhtml_branch_coverage=1 00:11:25.242 --rc genhtml_function_coverage=1 00:11:25.242 --rc genhtml_legend=1 00:11:25.242 --rc geninfo_all_blocks=1 
00:11:25.242 --rc geninfo_unexecuted_blocks=1 00:11:25.242 00:11:25.242 ' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.242 --rc genhtml_branch_coverage=1 00:11:25.242 --rc genhtml_function_coverage=1 00:11:25.242 --rc genhtml_legend=1 00:11:25.242 --rc geninfo_all_blocks=1 00:11:25.242 --rc geninfo_unexecuted_blocks=1 00:11:25.242 00:11:25.242 ' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.242 --rc genhtml_branch_coverage=1 00:11:25.242 --rc genhtml_function_coverage=1 00:11:25.242 --rc genhtml_legend=1 00:11:25.242 --rc geninfo_all_blocks=1 00:11:25.242 --rc geninfo_unexecuted_blocks=1 00:11:25.242 00:11:25.242 ' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.242 --rc genhtml_branch_coverage=1 00:11:25.242 --rc genhtml_function_coverage=1 00:11:25.242 --rc genhtml_legend=1 00:11:25.242 --rc geninfo_all_blocks=1 00:11:25.242 --rc geninfo_unexecuted_blocks=1 00:11:25.242 00:11:25.242 ' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:25.242 11:05:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.243 ************************************ 00:11:25.243 START TEST nvmf_example 00:11:25.243 ************************************ 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.243 * Looking for test storage... 00:11:25.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.243 
11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.243 --rc genhtml_branch_coverage=1 00:11:25.243 --rc genhtml_function_coverage=1 00:11:25.243 --rc genhtml_legend=1 00:11:25.243 --rc geninfo_all_blocks=1 00:11:25.243 --rc geninfo_unexecuted_blocks=1 00:11:25.243 00:11:25.243 ' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.243 --rc genhtml_branch_coverage=1 00:11:25.243 --rc genhtml_function_coverage=1 00:11:25.243 --rc genhtml_legend=1 00:11:25.243 --rc geninfo_all_blocks=1 00:11:25.243 --rc geninfo_unexecuted_blocks=1 00:11:25.243 00:11:25.243 ' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.243 --rc genhtml_branch_coverage=1 00:11:25.243 --rc genhtml_function_coverage=1 00:11:25.243 --rc genhtml_legend=1 00:11:25.243 --rc geninfo_all_blocks=1 00:11:25.243 --rc geninfo_unexecuted_blocks=1 00:11:25.243 00:11:25.243 ' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.243 --rc 
genhtml_branch_coverage=1 00:11:25.243 --rc genhtml_function_coverage=1 00:11:25.243 --rc genhtml_legend=1 00:11:25.243 --rc geninfo_all_blocks=1 00:11:25.243 --rc geninfo_unexecuted_blocks=1 00:11:25.243 00:11:25.243 ' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.243 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:25.244 11:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.244 
11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.244 11:05:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.783 11:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.783 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.784 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.784 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.784 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.784 11:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.784 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.784 
11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.784 11:05:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:11:27.784 00:11:27.784 --- 10.0.0.2 ping statistics --- 00:11:27.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.784 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:11:27.784 00:11:27.784 --- 10.0.0.1 ping statistics --- 00:11:27.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.784 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.784 11:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=160503 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 160503 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 160503 ']' 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.784 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:28.045 11:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:28.045 11:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.353 Initializing NVMe Controllers 00:11:40.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:40.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:40.353 Initialization complete. Launching workers. 00:11:40.353 ======================================================== 00:11:40.354 Latency(us) 00:11:40.354 Device Information : IOPS MiB/s Average min max 00:11:40.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14558.17 56.87 4396.16 896.85 15293.51 00:11:40.354 ======================================================== 00:11:40.354 Total : 14558.17 56.87 4396.16 896.85 15293.51 00:11:40.354 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:40.354 rmmod nvme_tcp 00:11:40.354 rmmod nvme_fabrics 00:11:40.354 rmmod nvme_keyring 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 160503 ']' 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 160503 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 160503 ']' 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 160503 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160503 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160503' 00:11:40.354 killing process with pid 160503 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 160503 00:11:40.354 11:06:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 160503 00:11:40.354 nvmf threads initialize successfully 00:11:40.354 bdev subsystem init successfully 00:11:40.354 created a nvmf target service 00:11:40.354 create targets's poll groups done 00:11:40.354 all subsystems of target started 00:11:40.354 nvmf target is running 00:11:40.354 all subsystems of target stopped 00:11:40.354 destroy targets's poll groups done 00:11:40.354 destroyed the nvmf target service 00:11:40.354 bdev subsystem finish 
successfully 00:11:40.354 nvmf threads destroy successfully 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.354 11:06:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.615 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.615 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:40.615 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.615 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.877 00:11:40.877 real 0m15.671s 00:11:40.877 user 0m42.032s 00:11:40.877 sys 0m3.888s 00:11:40.877 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:40.877 ************************************ 00:11:40.877 END TEST nvmf_example 00:11:40.877 ************************************ 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.877 ************************************ 00:11:40.877 START TEST nvmf_filesystem 00:11:40.877 ************************************ 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.877 * Looking for test storage... 
00:11:40.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.877 
11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:40.877 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:40.877 --rc genhtml_branch_coverage=1 00:11:40.877 --rc genhtml_function_coverage=1 00:11:40.877 --rc genhtml_legend=1 00:11:40.877 --rc geninfo_all_blocks=1 00:11:40.877 --rc geninfo_unexecuted_blocks=1 00:11:40.877 00:11:40.877 ' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.877 --rc genhtml_branch_coverage=1 00:11:40.877 --rc genhtml_function_coverage=1 00:11:40.877 --rc genhtml_legend=1 00:11:40.877 --rc geninfo_all_blocks=1 00:11:40.877 --rc geninfo_unexecuted_blocks=1 00:11:40.877 00:11:40.877 ' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.877 --rc genhtml_branch_coverage=1 00:11:40.877 --rc genhtml_function_coverage=1 00:11:40.877 --rc genhtml_legend=1 00:11:40.877 --rc geninfo_all_blocks=1 00:11:40.877 --rc geninfo_unexecuted_blocks=1 00:11:40.877 00:11:40.877 ' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:40.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.877 --rc genhtml_branch_coverage=1 00:11:40.877 --rc genhtml_function_coverage=1 00:11:40.877 --rc genhtml_legend=1 00:11:40.877 --rc geninfo_all_blocks=1 00:11:40.877 --rc geninfo_unexecuted_blocks=1 00:11:40.877 00:11:40.877 ' 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:40.877 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:40.877 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:40.878 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:40.878 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:40.878 
11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:40.878 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:40.879 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:40.879 #define SPDK_CONFIG_H 00:11:40.879 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:40.879 #define SPDK_CONFIG_APPS 1 00:11:40.879 #define SPDK_CONFIG_ARCH native 00:11:40.879 #undef SPDK_CONFIG_ASAN 00:11:40.879 #undef SPDK_CONFIG_AVAHI 00:11:40.879 #undef SPDK_CONFIG_CET 00:11:40.879 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:40.879 #define SPDK_CONFIG_COVERAGE 1 00:11:40.879 #define SPDK_CONFIG_CROSS_PREFIX 00:11:40.879 #undef SPDK_CONFIG_CRYPTO 00:11:40.879 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:40.879 #undef SPDK_CONFIG_CUSTOMOCF 00:11:40.879 #undef SPDK_CONFIG_DAOS 00:11:40.879 #define SPDK_CONFIG_DAOS_DIR 00:11:40.879 #define SPDK_CONFIG_DEBUG 1 00:11:40.879 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:40.879 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.879 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:40.879 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.879 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:40.879 #undef SPDK_CONFIG_DPDK_UADK 00:11:40.879 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.879 #define SPDK_CONFIG_EXAMPLES 1 00:11:40.879 #undef SPDK_CONFIG_FC 00:11:40.879 #define SPDK_CONFIG_FC_PATH 00:11:40.879 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:40.879 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:40.879 #define SPDK_CONFIG_FSDEV 1 00:11:40.879 #undef SPDK_CONFIG_FUSE 00:11:40.879 #undef SPDK_CONFIG_FUZZER 00:11:40.879 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:40.879 #undef SPDK_CONFIG_GOLANG 00:11:40.879 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:40.879 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:40.879 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:40.879 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:40.879 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:40.879 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:40.879 #undef SPDK_CONFIG_HAVE_LZ4 00:11:40.879 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:40.879 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:40.879 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:40.879 #define SPDK_CONFIG_IDXD 1 00:11:40.879 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:40.879 #undef SPDK_CONFIG_IPSEC_MB 00:11:40.879 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:40.879 #define SPDK_CONFIG_ISAL 1 00:11:40.879 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:40.879 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:40.879 #define SPDK_CONFIG_LIBDIR 00:11:40.879 #undef SPDK_CONFIG_LTO 00:11:40.879 #define SPDK_CONFIG_MAX_LCORES 128 00:11:40.879 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:40.879 #define SPDK_CONFIG_NVME_CUSE 1 00:11:40.879 #undef SPDK_CONFIG_OCF 00:11:40.879 #define SPDK_CONFIG_OCF_PATH 00:11:40.879 #define SPDK_CONFIG_OPENSSL_PATH 00:11:40.879 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:40.879 #define SPDK_CONFIG_PGO_DIR 00:11:40.879 #undef SPDK_CONFIG_PGO_USE 00:11:40.879 #define SPDK_CONFIG_PREFIX /usr/local 00:11:40.879 #undef SPDK_CONFIG_RAID5F 00:11:40.879 #undef SPDK_CONFIG_RBD 00:11:40.879 #define SPDK_CONFIG_RDMA 1 00:11:40.879 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:40.879 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:40.879 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:40.879 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:40.879 #define SPDK_CONFIG_SHARED 1 00:11:40.879 #undef SPDK_CONFIG_SMA 00:11:40.879 #define SPDK_CONFIG_TESTS 1 00:11:40.879 #undef SPDK_CONFIG_TSAN 00:11:40.879 #define SPDK_CONFIG_UBLK 1 00:11:40.879 #define SPDK_CONFIG_UBSAN 1 00:11:40.879 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:40.879 #undef SPDK_CONFIG_URING 00:11:40.879 #define SPDK_CONFIG_URING_PATH 00:11:40.879 #undef SPDK_CONFIG_URING_ZNS 00:11:40.879 #undef SPDK_CONFIG_USDT 00:11:40.879 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:40.879 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:40.879 #define SPDK_CONFIG_VFIO_USER 1 00:11:40.879 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:40.879 #define SPDK_CONFIG_VHOST 1 00:11:40.879 #define SPDK_CONFIG_VIRTIO 1 00:11:40.879 #undef SPDK_CONFIG_VTUNE 00:11:40.879 #define SPDK_CONFIG_VTUNE_DIR 00:11:40.879 #define SPDK_CONFIG_WERROR 1 00:11:40.879 #define SPDK_CONFIG_WPDK_DIR 00:11:40.879 #undef SPDK_CONFIG_XNVME 00:11:40.879 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.879 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:40.879 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:40.880 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:40.880 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.142 
11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.142 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.142 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:41.143 
11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.143 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.143 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 162198 ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 162198 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.CGATc7 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.CGATc7/tests/target /tmp/spdk.CGATc7 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54526693376 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988511744 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7461818368 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.144 
11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984224768 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:41.144 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993944576 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994255872 00:11:41.145 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:41.145 * Looking for test storage... 
00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54526693376 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9676410880 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.145 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.145 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.145 --rc genhtml_branch_coverage=1 00:11:41.145 --rc genhtml_function_coverage=1 00:11:41.145 --rc genhtml_legend=1 00:11:41.145 --rc geninfo_all_blocks=1 00:11:41.145 --rc geninfo_unexecuted_blocks=1 00:11:41.145 00:11:41.145 ' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.145 --rc genhtml_branch_coverage=1 00:11:41.145 --rc genhtml_function_coverage=1 00:11:41.145 --rc genhtml_legend=1 00:11:41.145 --rc geninfo_all_blocks=1 00:11:41.145 --rc geninfo_unexecuted_blocks=1 00:11:41.145 00:11:41.145 ' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.145 --rc genhtml_branch_coverage=1 00:11:41.145 --rc genhtml_function_coverage=1 00:11:41.145 --rc genhtml_legend=1 00:11:41.145 --rc geninfo_all_blocks=1 00:11:41.145 --rc geninfo_unexecuted_blocks=1 00:11:41.145 00:11:41.145 ' 00:11:41.145 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.145 --rc genhtml_branch_coverage=1 00:11:41.145 --rc genhtml_function_coverage=1 00:11:41.145 --rc genhtml_legend=1 00:11:41.146 --rc geninfo_all_blocks=1 00:11:41.146 --rc geninfo_unexecuted_blocks=1 00:11:41.146 00:11:41.146 ' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.146 11:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.146 11:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.685 11:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:43.685 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.686 11:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.686 11:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:43.686 11:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:43.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:43.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms
00:11:43.686
00:11:43.686 --- 10.0.0.2 ping statistics ---
00:11:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.686 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:43.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:43.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:11:43.686
00:11:43.686 --- 10.0.0.1 ping statistics ---
00:11:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:43.686 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:43.686 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:43.686 ************************************
00:11:43.686 START TEST nvmf_filesystem_no_in_capsule
00:11:43.686 ************************************
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=164314
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 164314
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 164314 ']'
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:43.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:43.687 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.687 [2024-11-17 11:06:08.167211] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:43.687 [2024-11-17 11:06:08.167287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:43.687 [2024-11-17 11:06:08.236696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:43.687 [2024-11-17 11:06:08.282354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:43.687 [2024-11-17 11:06:08.282413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
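`waitforlisten` (called at `nvmf/common.sh@510` above) blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal standalone sketch of that polling idea — the function name, defaults, and loop body here are illustrative, not SPDK's exact implementation:

```shell
# Minimal re-creation of the waitforlisten idea: poll for a UNIX-domain
# socket (default /var/tmp/spdk.sock) until it appears or retries run out.
# Names and defaults are illustrative, not copied from SPDK.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$sock" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1                         # gave up; caller should fail the test
}

# Demo against a path that never appears:
if wait_for_rpc_sock /tmp/no-such.sock 3; then echo up; else echo timeout; fi
# prints "timeout" after ~0.3 s
```

SPDK's real helper additionally verifies the PID is still alive between polls, so a crashed target fails fast instead of exhausting the retry budget.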
00:11:43.687 [2024-11-17 11:06:08.282442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:43.687 [2024-11-17 11:06:08.282453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:43.687 [2024-11-17 11:06:08.282462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:43.687 [2024-11-17 11:06:08.283999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:43.687 [2024-11-17 11:06:08.284108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:43.687 [2024-11-17 11:06:08.284183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:43.687 [2024-11-17 11:06:08.284186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.947 [2024-11-17 11:06:08.420272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.947 Malloc1
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:43.947 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:44.207 [2024-11-17 11:06:08.606932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:11:44.207 {
00:11:44.207 "name": "Malloc1",
00:11:44.207 "aliases": [
00:11:44.207 "b23558be-f65d-4648-91ba-0cc22ad3b553"
00:11:44.207 ],
00:11:44.207 "product_name": "Malloc disk",
00:11:44.207 "block_size": 512,
00:11:44.207 "num_blocks": 1048576,
00:11:44.207 "uuid": "b23558be-f65d-4648-91ba-0cc22ad3b553",
00:11:44.207 "assigned_rate_limits": {
00:11:44.207 "rw_ios_per_sec": 0,
00:11:44.207 "rw_mbytes_per_sec": 0,
00:11:44.207 "r_mbytes_per_sec": 0,
00:11:44.207 "w_mbytes_per_sec": 0
00:11:44.207 },
00:11:44.207 "claimed": true,
00:11:44.207 "claim_type": "exclusive_write",
00:11:44.207 "zoned": false,
00:11:44.207 "supported_io_types": {
00:11:44.207 "read": true,
00:11:44.207 "write": true,
00:11:44.207 "unmap": true,
00:11:44.207 "flush": true,
00:11:44.207 "reset": true,
00:11:44.207 "nvme_admin": false,
00:11:44.207 "nvme_io": false,
00:11:44.207 "nvme_io_md": false,
00:11:44.207 "write_zeroes": true,
00:11:44.207 "zcopy": true,
00:11:44.207 "get_zone_info": false,
00:11:44.207 "zone_management": false,
00:11:44.207 "zone_append": false,
00:11:44.207 "compare": false,
00:11:44.207 "compare_and_write": false,
00:11:44.207 "abort": true,
00:11:44.207 "seek_hole": false,
00:11:44.207 "seek_data": false,
00:11:44.207 "copy": true,
00:11:44.207 "nvme_iov_md": false
00:11:44.207 },
00:11:44.207 "memory_domains": [
00:11:44.207 {
00:11:44.207 "dma_device_id": "system",
00:11:44.207 "dma_device_type": 1
00:11:44.207 },
00:11:44.207 {
00:11:44.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:44.207 "dma_device_type": 2
00:11:44.207 }
00:11:44.207 ],
00:11:44.207 "driver_specific": {}
00:11:44.207 }
00:11:44.207 ]'
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:44.207 11:06:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:44.773 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:44.773 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:11:44.773 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:44.773 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:44.773 11:06:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:11:46.681 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:46.682 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:46.943 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:46.943 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:47.205 11:06:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:47.775 11:06:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:48.719 ************************************
00:11:48.719 START TEST filesystem_ext4
00:11:48.719 ************************************
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:11:48.719 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:48.719 mke2fs 1.47.0 (5-Feb-2023)
00:11:48.719 Discarding device blocks: 0/522240 done
00:11:48.719 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:48.719 Filesystem UUID: a8586750-fca6-4e3c-8e38-b26599dd158b
00:11:48.719 Superblock backups stored on blocks:
00:11:48.719 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:48.719
00:11:48.719 Allocating group tables: 0/64 done
00:11:48.719 Writing inode tables: 0/64 done
00:11:49.290 Creating journal (8192 blocks): done
00:11:49.290 Writing superblocks and filesystem accounting information: 0/64 done
00:11:49.290
00:11:49.290 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:11:49.290 11:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:55.878 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:55.878 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:11:55.878 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:55.878 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:11:55.878 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 164314
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:55.879
00:11:55.879 real 0m6.518s
00:11:55.879 user 0m0.018s
00:11:55.879 sys 0m0.103s
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:11:55.879 ************************************
00:11:55.879 END TEST filesystem_ext4
00:11:55.879 ************************************
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:55.879 ************************************
00:11:55.879 START TEST filesystem_btrfs
00:11:55.879 ************************************
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:11:55.879 11:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:55.879 btrfs-progs v6.8.1
00:11:55.879 See https://btrfs.readthedocs.io for more information.
00:11:55.879
00:11:55.879 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:55.879 NOTE: several default settings have changed in version 5.15, please make sure
00:11:55.879 this does not affect your deployments:
00:11:55.879 - DUP for metadata (-m dup)
00:11:55.879 - enabled no-holes (-O no-holes)
00:11:55.879 - enabled free-space-tree (-R free-space-tree)
00:11:55.879
00:11:55.879 Label: (null)
00:11:55.879 UUID: d109d50b-e98d-4e46-a7dc-8132d4d7ef89
00:11:55.879 Node size: 16384
00:11:55.879 Sector size: 4096 (CPU page size: 4096)
00:11:55.879 Filesystem size: 510.00MiB
00:11:55.879 Block group profiles:
00:11:55.879 Data: single 8.00MiB
00:11:55.879 Metadata: DUP 32.00MiB
00:11:55.879 System: DUP 8.00MiB
00:11:55.879 SSD detected: yes
00:11:55.879 Zoned device: no
00:11:55.879 Features: extref, skinny-metadata, no-holes, free-space-tree
00:11:55.879 Checksum: crc32c
00:11:55.879 Number of devices: 1
00:11:55.879 Devices:
00:11:55.879 ID SIZE PATH
00:11:55.879 1 510.00MiB /dev/nvme0n1p1
00:11:55.879
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 164314
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:55.879
00:11:55.879 real 0m0.761s
00:11:55.879 user 0m0.019s
00:11:55.879 sys 0m0.134s
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:55.879 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:55.879 ************************************
00:11:55.879 END TEST filesystem_btrfs
00:11:55.879 ************************************
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:56.140 ************************************
00:11:56.140 START TEST filesystem_xfs
00:11:56.140 ************************************
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:11:56.140 11:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:56.140 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:56.140 = sectsz=512 attr=2, projid32bit=1
00:11:56.140 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:56.140 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:56.140 data = bsize=4096 blocks=130560, imaxpct=25
00:11:56.140 = sunit=0 swidth=0 blks
00:11:56.140 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:56.140 log =internal log bsize=4096 blocks=16384, version=2
00:11:56.140 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:56.140 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:57.080 Discarding blocks...Done.
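Each filesystem sub-test (ext4, btrfs, xfs) formats the same 510 MiB GPT partition carved from the 512 MiB `Malloc1` bdev exported over NVMe/TCP. As a quick sanity check on the numbers reported in the log — `bdev_get_bdevs` shows `block_size` 512 and `num_blocks` 1048576, and `mkfs.xfs` reports 130560 data blocks of 4096 bytes — the sizes line up:

```shell
# Sanity-check the sizes reported above (pure arithmetic, safe to run anywhere).
bs=512; nb=1048576                  # from bdev_get_bdevs output for Malloc1
malloc_size=$((bs * nb))
echo "$malloc_size"                 # 536870912 bytes = 512 MiB bdev
xfs_data=$((4096 * 130560))         # mkfs.xfs: bsize=4096, blocks=130560
echo "$xfs_data"                    # 534773760 bytes = 510 MiB partition
echo "$((xfs_data / 1048576)) MiB"  # 510 MiB, matching parted's mkpart result
```

The 2 MiB difference between the bdev and the partition is the space consumed by the GPT label and alignment, which is why `filesystem.sh@67` compares `nvme_size` against `malloc_size` before partitioning rather than after.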
00:11:57.080 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.080 11:06:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 164314 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.624 11:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.624 00:11:59.624 real 0m3.191s 00:11:59.624 user 0m0.022s 00:11:59.624 sys 0m0.088s 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.624 ************************************ 00:11:59.624 END TEST filesystem_xfs 00:11:59.624 ************************************ 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 164314 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 164314 ']' 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 164314 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.624 11:06:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164314 00:11:59.624 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.624 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.624 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164314' 00:11:59.624 killing process with pid 164314 00:11:59.624 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 164314 00:11:59.624 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 164314 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:59.884 00:11:59.884 real 0m16.319s 00:11:59.884 user 1m3.157s 00:11:59.884 sys 0m2.254s 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.884 ************************************ 00:11:59.884 END TEST nvmf_filesystem_no_in_capsule 00:11:59.884 ************************************ 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.884 11:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.884 ************************************ 00:11:59.884 START TEST nvmf_filesystem_in_capsule 00:11:59.884 ************************************ 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=166563 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 166563 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 166563 ']' 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.884 11:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:59.884 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.145 [2024-11-17 11:06:24.548385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:00.145 [2024-11-17 11:06:24.548460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.145 [2024-11-17 11:06:24.629724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.145 [2024-11-17 11:06:24.677768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.145 [2024-11-17 11:06:24.677846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.145 [2024-11-17 11:06:24.677860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.145 [2024-11-17 11:06:24.677878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.145 [2024-11-17 11:06:24.677888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:00.145 [2024-11-17 11:06:24.679354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.145 [2024-11-17 11:06:24.679419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.145 [2024-11-17 11:06:24.679483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.145 [2024-11-17 11:06:24.679486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.145 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.145 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.145 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.145 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.145 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 [2024-11-17 11:06:24.828969] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 Malloc1 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 11:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.404 [2024-11-17 11:06:25.013672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:00.404 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.404 11:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.405 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.405 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:00.405 { 00:12:00.405 "name": "Malloc1", 00:12:00.405 "aliases": [ 00:12:00.405 "1aa9bffb-57f4-4386-a88d-33cbdd51797e" 00:12:00.405 ], 00:12:00.405 "product_name": "Malloc disk", 00:12:00.405 "block_size": 512, 00:12:00.405 "num_blocks": 1048576, 00:12:00.405 "uuid": "1aa9bffb-57f4-4386-a88d-33cbdd51797e", 00:12:00.405 "assigned_rate_limits": { 00:12:00.405 "rw_ios_per_sec": 0, 00:12:00.405 "rw_mbytes_per_sec": 0, 00:12:00.405 "r_mbytes_per_sec": 0, 00:12:00.405 "w_mbytes_per_sec": 0 00:12:00.405 }, 00:12:00.405 "claimed": true, 00:12:00.405 "claim_type": "exclusive_write", 00:12:00.405 "zoned": false, 00:12:00.405 "supported_io_types": { 00:12:00.405 "read": true, 00:12:00.405 "write": true, 00:12:00.405 "unmap": true, 00:12:00.405 "flush": true, 00:12:00.405 "reset": true, 00:12:00.405 "nvme_admin": false, 00:12:00.405 "nvme_io": false, 00:12:00.405 "nvme_io_md": false, 00:12:00.405 "write_zeroes": true, 00:12:00.405 "zcopy": true, 00:12:00.405 "get_zone_info": false, 00:12:00.405 "zone_management": false, 00:12:00.405 "zone_append": false, 00:12:00.405 "compare": false, 00:12:00.405 "compare_and_write": false, 00:12:00.405 "abort": true, 00:12:00.405 "seek_hole": false, 00:12:00.405 "seek_data": false, 00:12:00.405 "copy": true, 00:12:00.405 "nvme_iov_md": false 00:12:00.405 }, 00:12:00.405 "memory_domains": [ 00:12:00.405 { 00:12:00.405 "dma_device_id": "system", 00:12:00.405 "dma_device_type": 1 00:12:00.405 }, 00:12:00.405 { 00:12:00.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.405 "dma_device_type": 2 00:12:00.405 } 00:12:00.405 ], 00:12:00.405 
"driver_specific": {} 00:12:00.405 } 00:12:00.405 ]' 00:12:00.405 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:00.665 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:01.237 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:01.237 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:01.237 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.237 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:01.237 11:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:03.142 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:03.142 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:03.142 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:03.401 11:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:03.401 11:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:03.401 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:04.339 11:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:05.717 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:05.717 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:05.717 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:05.717 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.717 11:06:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.717 ************************************ 00:12:05.717 START TEST filesystem_in_capsule_ext4 00:12:05.717 ************************************ 00:12:05.717 11:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:05.717 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:05.718 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:05.718 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:05.718 mke2fs 1.47.0 (5-Feb-2023) 00:12:05.718 Discarding device blocks: 
0/522240 done 00:12:05.718 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:05.718 Filesystem UUID: 50d60259-2753-45b4-bfeb-f81442683d9d 00:12:05.718 Superblock backups stored on blocks: 00:12:05.718 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:05.718 00:12:05.718 Allocating group tables: 0/64 done 00:12:05.718 Writing inode tables: 0/64 done 00:12:06.294 Creating journal (8192 blocks): done 00:12:06.294 Writing superblocks and filesystem accounting information: 0/64 done 00:12:06.294 00:12:06.294 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:06.294 11:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.877 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.877 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.877 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 166563 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.878 00:12:12.878 real 0m6.348s 00:12:12.878 user 0m0.025s 00:12:12.878 sys 0m0.059s 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.878 ************************************ 00:12:12.878 END TEST filesystem_in_capsule_ext4 00:12:12.878 ************************************ 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.878 ************************************ 00:12:12.878 START 
TEST filesystem_in_capsule_btrfs 00:12:12.878 ************************************ 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:12.878 btrfs-progs v6.8.1 00:12:12.878 See https://btrfs.readthedocs.io for more information. 00:12:12.878 00:12:12.878 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:12.878 NOTE: several default settings have changed in version 5.15, please make sure 00:12:12.878 this does not affect your deployments: 00:12:12.878 - DUP for metadata (-m dup) 00:12:12.878 - enabled no-holes (-O no-holes) 00:12:12.878 - enabled free-space-tree (-R free-space-tree) 00:12:12.878 00:12:12.878 Label: (null) 00:12:12.878 UUID: 2696ac11-947f-4e68-a04e-7648c613f38f 00:12:12.878 Node size: 16384 00:12:12.878 Sector size: 4096 (CPU page size: 4096) 00:12:12.878 Filesystem size: 510.00MiB 00:12:12.878 Block group profiles: 00:12:12.878 Data: single 8.00MiB 00:12:12.878 Metadata: DUP 32.00MiB 00:12:12.878 System: DUP 8.00MiB 00:12:12.878 SSD detected: yes 00:12:12.878 Zoned device: no 00:12:12.878 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:12.878 Checksum: crc32c 00:12:12.878 Number of devices: 1 00:12:12.878 Devices: 00:12:12.878 ID SIZE PATH 00:12:12.878 1 510.00MiB /dev/nvme0n1p1 00:12:12.878 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:12.878 11:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:12.878 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 166563 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.139 00:12:13.139 real 0m1.161s 00:12:13.139 user 0m0.021s 00:12:13.139 sys 0m0.095s 00:12:13.139 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.140 ************************************ 00:12:13.140 END TEST filesystem_in_capsule_btrfs 00:12:13.140 ************************************ 00:12:13.140 11:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.140 ************************************ 00:12:13.140 START TEST filesystem_in_capsule_xfs 00:12:13.140 ************************************ 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:13.140 
11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:13.140 11:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:13.140 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:13.140 = sectsz=512 attr=2, projid32bit=1 00:12:13.140 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:13.140 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:13.140 data = bsize=4096 blocks=130560, imaxpct=25 00:12:13.140 = sunit=0 swidth=0 blks 00:12:13.140 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:13.140 log =internal log bsize=4096 blocks=16384, version=2 00:12:13.140 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:13.140 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:14.080 Discarding blocks...Done. 
00:12:14.080 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:14.080 11:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 166563 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.991 00:12:15.991 real 0m2.716s 00:12:15.991 user 0m0.018s 00:12:15.991 sys 0m0.059s 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.991 ************************************ 00:12:15.991 END TEST filesystem_in_capsule_xfs 00:12:15.991 ************************************ 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.991 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.250 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 166563 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 166563 ']' 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 166563 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.250 11:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 166563 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 166563' 00:12:16.250 killing process with pid 166563 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 166563 00:12:16.250 11:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 166563 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:16.511 00:12:16.511 real 0m16.620s 00:12:16.511 user 1m4.392s 00:12:16.511 sys 0m2.096s 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.511 ************************************ 00:12:16.511 END TEST nvmf_filesystem_in_capsule 00:12:16.511 ************************************ 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.511 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.511 rmmod nvme_tcp 00:12:16.511 rmmod nvme_fabrics 00:12:16.771 rmmod nvme_keyring 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.771 11:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.733 00:12:18.733 real 0m37.915s 00:12:18.733 user 2m8.720s 00:12:18.733 sys 0m6.159s 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 ************************************ 00:12:18.733 END TEST nvmf_filesystem 00:12:18.733 ************************************ 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.733 ************************************ 00:12:18.733 START TEST nvmf_target_discovery 00:12:18.733 ************************************ 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.733 * Looking for test storage... 
00:12:18.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:18.733 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:18.993 
11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.993 11:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.993 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.994 11:06:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.532 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.532 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:21.532 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:21.532 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.532 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:21.532 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.532 11:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:21.532 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.532 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:21.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:12:21.533 00:12:21.533 --- 10.0.0.2 ping statistics --- 00:12:21.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.533 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:21.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:12:21.533 00:12:21.533 --- 10.0.0.1 ping statistics --- 00:12:21.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.533 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=170713 00:12:21.533 11:06:45 
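[editor's note] The namespace plumbing traced above (`nvmf_tcp_init` in nvmf/common.sh) reduces to the ip/iptables sequence sketched below. This is a reconstruction, not the script itself: the `cvl_0_0`/`cvl_0_1` interface names, the `cvl_0_0_ns_spdk` namespace, and the 10.0.0.0/24 addresses are taken from this log, while the `RUN=echo` dry-run wrapper is an assumption added here because the real commands need root and the actual NICs.

```shell
# Dry-run sketch of the netns setup traced in the log above.
# RUN defaults to "echo" so the commands are printed, not executed;
# unset RUN (RUN=) and run as root to apply them for real.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush cvl_0_0
$RUN ip -4 addr flush cvl_0_1
$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"                      # target NIC moves into the namespace
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                                   # root ns -> target ns
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1               # target ns -> root ns
```

With the defaults the pings in the log confirm both directions work before the target is started inside the namespace.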
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 170713 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 170713 ']' 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.533 11:06:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 [2024-11-17 11:06:45.824070] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:21.533 [2024-11-17 11:06:45.824150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.533 [2024-11-17 11:06:45.893435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.533 [2024-11-17 11:06:45.939862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:21.533 [2024-11-17 11:06:45.939924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.533 [2024-11-17 11:06:45.939951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.533 [2024-11-17 11:06:45.939962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.533 [2024-11-17 11:06:45.939972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.533 [2024-11-17 11:06:45.941615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.533 [2024-11-17 11:06:45.941649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.533 [2024-11-17 11:06:45.941672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.533 [2024-11-17 11:06:45.941676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 [2024-11-17 11:06:46.076680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 Null1 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 
11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 [2024-11-17 11:06:46.121038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 Null2 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 
11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 Null3 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:21.533 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.534 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.794 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.794 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:21.794 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:21.794 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.794 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.794 Null4 00:12:21.795 
11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:21.795 00:12:21.795 Discovery Log Number of Records 6, Generation counter 6 00:12:21.795 =====Discovery Log Entry 0====== 00:12:21.795 trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: current discovery subsystem 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4420 00:12:21.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: explicit discovery connections, duplicate discovery information 00:12:21.795 sectype: none 00:12:21.795 =====Discovery Log Entry 1====== 00:12:21.795 trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: nvme subsystem 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4420 00:12:21.795 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: none 00:12:21.795 sectype: none 00:12:21.795 =====Discovery Log Entry 2====== 00:12:21.795 
trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: nvme subsystem 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4420 00:12:21.795 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: none 00:12:21.795 sectype: none 00:12:21.795 =====Discovery Log Entry 3====== 00:12:21.795 trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: nvme subsystem 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4420 00:12:21.795 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: none 00:12:21.795 sectype: none 00:12:21.795 =====Discovery Log Entry 4====== 00:12:21.795 trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: nvme subsystem 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4420 00:12:21.795 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: none 00:12:21.795 sectype: none 00:12:21.795 =====Discovery Log Entry 5====== 00:12:21.795 trtype: tcp 00:12:21.795 adrfam: ipv4 00:12:21.795 subtype: discovery subsystem referral 00:12:21.795 treq: not required 00:12:21.795 portid: 0 00:12:21.795 trsvcid: 4430 00:12:21.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:21.795 traddr: 10.0.0.2 00:12:21.795 eflags: none 00:12:21.795 sectype: none 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:21.795 Perform nvmf subsystem discovery via RPC 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.795 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.055 [ 00:12:22.055 { 00:12:22.055 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:22.055 "subtype": "Discovery", 00:12:22.055 "listen_addresses": [ 00:12:22.055 { 00:12:22.055 "trtype": "TCP", 00:12:22.055 "adrfam": "IPv4", 00:12:22.055 "traddr": "10.0.0.2", 00:12:22.055 "trsvcid": "4420" 00:12:22.055 } 00:12:22.055 ], 00:12:22.055 "allow_any_host": true, 00:12:22.055 "hosts": [] 00:12:22.055 }, 00:12:22.055 { 00:12:22.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.055 "subtype": "NVMe", 00:12:22.055 "listen_addresses": [ 00:12:22.055 { 00:12:22.055 "trtype": "TCP", 00:12:22.055 "adrfam": "IPv4", 00:12:22.055 "traddr": "10.0.0.2", 00:12:22.055 "trsvcid": "4420" 00:12:22.055 } 00:12:22.055 ], 00:12:22.055 "allow_any_host": true, 00:12:22.055 "hosts": [], 00:12:22.055 "serial_number": "SPDK00000000000001", 00:12:22.055 "model_number": "SPDK bdev Controller", 00:12:22.055 "max_namespaces": 32, 00:12:22.055 "min_cntlid": 1, 00:12:22.055 "max_cntlid": 65519, 00:12:22.055 "namespaces": [ 00:12:22.055 { 00:12:22.055 "nsid": 1, 00:12:22.055 "bdev_name": "Null1", 00:12:22.055 "name": "Null1", 00:12:22.055 "nguid": "4D2F60DB93F04A42BF1A43841C84B73F", 00:12:22.055 "uuid": "4d2f60db-93f0-4a42-bf1a-43841c84b73f" 00:12:22.055 } 00:12:22.055 ] 00:12:22.055 }, 00:12:22.055 { 00:12:22.055 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:22.055 "subtype": "NVMe", 00:12:22.055 "listen_addresses": [ 00:12:22.055 { 00:12:22.055 "trtype": "TCP", 00:12:22.055 "adrfam": "IPv4", 00:12:22.055 "traddr": "10.0.0.2", 00:12:22.055 "trsvcid": "4420" 00:12:22.055 } 00:12:22.055 ], 00:12:22.055 "allow_any_host": true, 00:12:22.055 "hosts": [], 00:12:22.055 "serial_number": "SPDK00000000000002", 00:12:22.055 "model_number": "SPDK bdev Controller", 00:12:22.055 "max_namespaces": 32, 00:12:22.055 "min_cntlid": 1, 00:12:22.055 "max_cntlid": 65519, 00:12:22.055 "namespaces": [ 00:12:22.055 { 00:12:22.055 "nsid": 1, 00:12:22.055 "bdev_name": "Null2", 00:12:22.055 "name": "Null2", 00:12:22.055 "nguid": "0E5F56BA3B1F4C66B7A6AE1292F1929D", 
00:12:22.055 "uuid": "0e5f56ba-3b1f-4c66-b7a6-ae1292f1929d" 00:12:22.055 } 00:12:22.055 ] 00:12:22.055 }, 00:12:22.055 { 00:12:22.055 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:22.055 "subtype": "NVMe", 00:12:22.055 "listen_addresses": [ 00:12:22.055 { 00:12:22.055 "trtype": "TCP", 00:12:22.055 "adrfam": "IPv4", 00:12:22.055 "traddr": "10.0.0.2", 00:12:22.055 "trsvcid": "4420" 00:12:22.055 } 00:12:22.055 ], 00:12:22.055 "allow_any_host": true, 00:12:22.055 "hosts": [], 00:12:22.055 "serial_number": "SPDK00000000000003", 00:12:22.055 "model_number": "SPDK bdev Controller", 00:12:22.055 "max_namespaces": 32, 00:12:22.055 "min_cntlid": 1, 00:12:22.055 "max_cntlid": 65519, 00:12:22.055 "namespaces": [ 00:12:22.055 { 00:12:22.055 "nsid": 1, 00:12:22.055 "bdev_name": "Null3", 00:12:22.055 "name": "Null3", 00:12:22.055 "nguid": "7A6D67B482034A35A221460F29CD7429", 00:12:22.055 "uuid": "7a6d67b4-8203-4a35-a221-460f29cd7429" 00:12:22.055 } 00:12:22.055 ] 00:12:22.055 }, 00:12:22.055 { 00:12:22.055 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:22.055 "subtype": "NVMe", 00:12:22.055 "listen_addresses": [ 00:12:22.055 { 00:12:22.055 "trtype": "TCP", 00:12:22.055 "adrfam": "IPv4", 00:12:22.055 "traddr": "10.0.0.2", 00:12:22.055 "trsvcid": "4420" 00:12:22.055 } 00:12:22.055 ], 00:12:22.055 "allow_any_host": true, 00:12:22.055 "hosts": [], 00:12:22.055 "serial_number": "SPDK00000000000004", 00:12:22.055 "model_number": "SPDK bdev Controller", 00:12:22.055 "max_namespaces": 32, 00:12:22.055 "min_cntlid": 1, 00:12:22.055 "max_cntlid": 65519, 00:12:22.055 "namespaces": [ 00:12:22.055 { 00:12:22.055 "nsid": 1, 00:12:22.055 "bdev_name": "Null4", 00:12:22.055 "name": "Null4", 00:12:22.055 "nguid": "124F57D3B70D45F6B81F8D5D27DAAE85", 00:12:22.055 "uuid": "124f57d3-b70d-45f6-b81f-8d5d27daae85" 00:12:22.055 } 00:12:22.055 ] 00:12:22.055 } 00:12:22.055 ] 00:12:22.055 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.055 
11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:22.055 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.055 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.055 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.056 rmmod nvme_tcp 00:12:22.056 rmmod nvme_fabrics 00:12:22.056 rmmod nvme_keyring 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 170713 ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 170713 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 170713 ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 170713 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:22.056 
11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 170713 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 170713' 00:12:22.056 killing process with pid 170713 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 170713 00:12:22.056 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 170713 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.316 11:06:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.862 00:12:24.862 real 0m5.662s 00:12:24.862 user 0m4.764s 00:12:24.862 sys 0m2.004s 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.862 ************************************ 00:12:24.862 END TEST nvmf_target_discovery 00:12:24.862 ************************************ 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.862 11:06:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.862 ************************************ 00:12:24.862 START TEST nvmf_referrals 00:12:24.862 ************************************ 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:24.862 * Looking for test storage... 
00:12:24.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:24.862 11:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:24.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.862 
--rc genhtml_branch_coverage=1 00:12:24.862 --rc genhtml_function_coverage=1 00:12:24.862 --rc genhtml_legend=1 00:12:24.862 --rc geninfo_all_blocks=1 00:12:24.862 --rc geninfo_unexecuted_blocks=1 00:12:24.862 00:12:24.862 ' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:24.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.862 --rc genhtml_branch_coverage=1 00:12:24.862 --rc genhtml_function_coverage=1 00:12:24.862 --rc genhtml_legend=1 00:12:24.862 --rc geninfo_all_blocks=1 00:12:24.862 --rc geninfo_unexecuted_blocks=1 00:12:24.862 00:12:24.862 ' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:24.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.862 --rc genhtml_branch_coverage=1 00:12:24.862 --rc genhtml_function_coverage=1 00:12:24.862 --rc genhtml_legend=1 00:12:24.862 --rc geninfo_all_blocks=1 00:12:24.862 --rc geninfo_unexecuted_blocks=1 00:12:24.862 00:12:24.862 ' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:24.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.862 --rc genhtml_branch_coverage=1 00:12:24.862 --rc genhtml_function_coverage=1 00:12:24.862 --rc genhtml_legend=1 00:12:24.862 --rc geninfo_all_blocks=1 00:12:24.862 --rc geninfo_unexecuted_blocks=1 00:12:24.862 00:12:24.862 ' 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.862 
11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.862 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.863 11:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.863 11:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.863 11:06:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.770 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.770 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.770 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.770 11:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.770 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.770 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:27.032 00:12:27.032 --- 10.0.0.2 ping statistics --- 00:12:27.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.032 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:27.032 00:12:27.032 --- 10.0.0.1 ping statistics --- 00:12:27.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.032 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=172817 00:12:27.032 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 172817 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 172817 ']' 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.033 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.033 [2024-11-17 11:06:51.545093] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:27.033 [2024-11-17 11:06:51.545199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.033 [2024-11-17 11:06:51.618167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.033 [2024-11-17 11:06:51.667402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.033 [2024-11-17 11:06:51.667469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:27.033 [2024-11-17 11:06:51.667483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.033 [2024-11-17 11:06:51.667495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.033 [2024-11-17 11:06:51.667505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.033 [2024-11-17 11:06:51.669186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.033 [2024-11-17 11:06:51.669252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.033 [2024-11-17 11:06:51.669321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.033 [2024-11-17 11:06:51.669324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.292 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 [2024-11-17 11:06:51.819182] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 [2024-11-17 11:06:51.831469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.293 11:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.293 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.552 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.552 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.552 11:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.552 11:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.552 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.811 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.070 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.329 11:06:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.588 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.847 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:29.106 11:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.106 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.365 rmmod nvme_tcp 00:12:29.365 rmmod nvme_fabrics 00:12:29.365 rmmod nvme_keyring 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 172817 ']' 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 172817 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 172817 ']' 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 172817 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.365 11:06:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 172817 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 172817' 00:12:29.625 killing process with pid 172817 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 172817 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 172817 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.625 11:06:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.173 00:12:32.173 real 0m7.269s 00:12:32.173 user 0m11.744s 00:12:32.173 sys 0m2.352s 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.173 ************************************ 
00:12:32.173 END TEST nvmf_referrals 00:12:32.173 ************************************ 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.173 ************************************ 00:12:32.173 START TEST nvmf_connect_disconnect 00:12:32.173 ************************************ 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.173 * Looking for test storage... 
00:12:32.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.173 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.173 --rc genhtml_branch_coverage=1 00:12:32.173 --rc genhtml_function_coverage=1 00:12:32.173 --rc genhtml_legend=1 00:12:32.173 --rc geninfo_all_blocks=1 00:12:32.173 --rc geninfo_unexecuted_blocks=1 00:12:32.173 00:12:32.173 ' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.174 --rc genhtml_branch_coverage=1 00:12:32.174 --rc genhtml_function_coverage=1 00:12:32.174 --rc genhtml_legend=1 00:12:32.174 --rc geninfo_all_blocks=1 00:12:32.174 --rc geninfo_unexecuted_blocks=1 00:12:32.174 00:12:32.174 ' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.174 --rc genhtml_branch_coverage=1 00:12:32.174 --rc genhtml_function_coverage=1 00:12:32.174 --rc genhtml_legend=1 00:12:32.174 --rc geninfo_all_blocks=1 00:12:32.174 --rc geninfo_unexecuted_blocks=1 00:12:32.174 00:12:32.174 ' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.174 --rc genhtml_branch_coverage=1 00:12:32.174 --rc genhtml_function_coverage=1 00:12:32.174 --rc genhtml_legend=1 00:12:32.174 --rc geninfo_all_blocks=1 00:12:32.174 --rc geninfo_unexecuted_blocks=1 00:12:32.174 00:12:32.174 ' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.174 11:06:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.086 11:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.086 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.087 11:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:34.087 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:34.087 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.087 11:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:34.087 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.087 11:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:34.087 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.087 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.347 11:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.347 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.347 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.347 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:12:34.347 00:12:34.347 --- 10.0.0.2 ping statistics --- 00:12:34.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.347 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:12:34.347 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:12:34.347 00:12:34.347 --- 10.0.0.1 ping statistics --- 00:12:34.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.348 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=175120 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 175120 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 175120 ']' 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.348 11:06:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.348 [2024-11-17 11:06:58.866433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:12:34.348 [2024-11-17 11:06:58.866511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.348 [2024-11-17 11:06:58.935509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.348 [2024-11-17 11:06:58.979239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:34.348 [2024-11-17 11:06:58.979317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.348 [2024-11-17 11:06:58.979330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.348 [2024-11-17 11:06:58.979341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.348 [2024-11-17 11:06:58.979365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.348 [2024-11-17 11:06:58.980966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.348 [2024-11-17 11:06:58.981031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.348 [2024-11-17 11:06:58.981120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.348 [2024-11-17 11:06:58.981123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.607 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 [2024-11-17 11:06:59.122625] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.607 11:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.607 [2024-11-17 11:06:59.188017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:34.607 11:06:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:37.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.032 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.328 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.675 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.022 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.410 rmmod nvme_tcp 00:16:25.410 rmmod nvme_fabrics 00:16:25.410 rmmod nvme_keyring 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 175120 ']' 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 175120 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 175120 ']' 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 175120 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175120 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175120' 00:16:25.410 killing process with pid 175120 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 175120 00:16:25.410 11:10:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 175120 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.668 11:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.668 11:10:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.579 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:27.579 00:16:27.579 real 3m55.882s 00:16:27.579 user 14m57.604s 00:16:27.579 sys 0m35.988s 00:16:27.579 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.579 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.579 ************************************ 00:16:27.579 END TEST nvmf_connect_disconnect 00:16:27.579 ************************************ 00:16:27.838 11:10:52 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.838 ************************************ 00:16:27.838 START TEST nvmf_multitarget 00:16:27.838 ************************************ 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:27.838 * Looking for test storage... 00:16:27.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:27.838 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.839 --rc genhtml_branch_coverage=1 00:16:27.839 --rc genhtml_function_coverage=1 00:16:27.839 --rc genhtml_legend=1 00:16:27.839 --rc geninfo_all_blocks=1 00:16:27.839 --rc 
geninfo_unexecuted_blocks=1 00:16:27.839 00:16:27.839 ' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.839 --rc genhtml_branch_coverage=1 00:16:27.839 --rc genhtml_function_coverage=1 00:16:27.839 --rc genhtml_legend=1 00:16:27.839 --rc geninfo_all_blocks=1 00:16:27.839 --rc geninfo_unexecuted_blocks=1 00:16:27.839 00:16:27.839 ' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.839 --rc genhtml_branch_coverage=1 00:16:27.839 --rc genhtml_function_coverage=1 00:16:27.839 --rc genhtml_legend=1 00:16:27.839 --rc geninfo_all_blocks=1 00:16:27.839 --rc geninfo_unexecuted_blocks=1 00:16:27.839 00:16:27.839 ' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.839 --rc genhtml_branch_coverage=1 00:16:27.839 --rc genhtml_function_coverage=1 00:16:27.839 --rc genhtml_legend=1 00:16:27.839 --rc geninfo_all_blocks=1 00:16:27.839 --rc geninfo_unexecuted_blocks=1 00:16:27.839 00:16:27.839 ' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.839 11:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.839 11:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.839 11:10:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.376 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.376 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.377 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:30.377 11:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:30.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:16:30.377 00:16:30.377 --- 10.0.0.2 ping statistics --- 00:16:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.377 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:16:30.377 00:16:30.377 --- 10.0.0.1 ping statistics --- 00:16:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.377 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=206286 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 206286 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 206286 ']' 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.377 11:10:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.377 [2024-11-17 11:10:54.799418] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:30.377 [2024-11-17 11:10:54.799533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.377 [2024-11-17 11:10:54.874871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.377 [2024-11-17 11:10:54.925260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.377 [2024-11-17 11:10:54.925323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.377 [2024-11-17 11:10:54.925352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.377 [2024-11-17 11:10:54.925363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.378 [2024-11-17 11:10:54.925374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.378 [2024-11-17 11:10:54.927061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.378 [2024-11-17 11:10:54.927125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.378 [2024-11-17 11:10:54.927147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.378 [2024-11-17 11:10:54.927152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:30.636 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:16:30.895 "nvmf_tgt_1" 00:16:30.895 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:30.895 "nvmf_tgt_2" 00:16:30.895 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:30.895 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:30.895 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:30.895 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:31.153 true 00:16:31.153 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:31.153 true 00:16:31.153 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:31.153 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:31.414 11:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:31.414 rmmod nvme_tcp 00:16:31.414 rmmod nvme_fabrics 00:16:31.414 rmmod nvme_keyring 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 206286 ']' 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 206286 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 206286 ']' 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 206286 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.414 11:10:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206286 00:16:31.414 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.414 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:16:31.414 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206286' 00:16:31.414 killing process with pid 206286 00:16:31.414 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 206286 00:16:31.414 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 206286 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.674 11:10:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:34.217 00:16:34.217 
real 0m6.003s 00:16:34.217 user 0m6.832s 00:16:34.217 sys 0m2.112s 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.217 ************************************ 00:16:34.217 END TEST nvmf_multitarget 00:16:34.217 ************************************ 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:34.217 ************************************ 00:16:34.217 START TEST nvmf_rpc 00:16:34.217 ************************************ 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:34.217 * Looking for test storage... 
00:16:34.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.217 11:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:34.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.217 --rc genhtml_branch_coverage=1 00:16:34.217 --rc genhtml_function_coverage=1 00:16:34.217 --rc genhtml_legend=1 00:16:34.217 --rc geninfo_all_blocks=1 00:16:34.217 --rc geninfo_unexecuted_blocks=1 
00:16:34.217 00:16:34.217 ' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:34.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.217 --rc genhtml_branch_coverage=1 00:16:34.217 --rc genhtml_function_coverage=1 00:16:34.217 --rc genhtml_legend=1 00:16:34.217 --rc geninfo_all_blocks=1 00:16:34.217 --rc geninfo_unexecuted_blocks=1 00:16:34.217 00:16:34.217 ' 00:16:34.217 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:34.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.217 --rc genhtml_branch_coverage=1 00:16:34.217 --rc genhtml_function_coverage=1 00:16:34.217 --rc genhtml_legend=1 00:16:34.217 --rc geninfo_all_blocks=1 00:16:34.217 --rc geninfo_unexecuted_blocks=1 00:16:34.217 00:16:34.218 ' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:34.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.218 --rc genhtml_branch_coverage=1 00:16:34.218 --rc genhtml_function_coverage=1 00:16:34.218 --rc genhtml_legend=1 00:16:34.218 --rc geninfo_all_blocks=1 00:16:34.218 --rc geninfo_unexecuted_blocks=1 00:16:34.218 00:16:34.218 ' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.218 11:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:34.218 11:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:34.218 11:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.122 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.122 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.123 
11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:16:36.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.123 11:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.123 
11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:16:36.123 00:16:36.123 --- 10.0.0.2 ping statistics --- 00:16:36.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.123 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:16:36.123 00:16:36.123 --- 10.0.0.1 ping statistics --- 00:16:36.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.123 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.123 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=208392 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.124 
11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 208392 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 208392 ']' 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.124 11:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.383 [2024-11-17 11:11:00.820063] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:16:36.383 [2024-11-17 11:11:00.820140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.383 [2024-11-17 11:11:00.893097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.383 [2024-11-17 11:11:00.938651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.383 [2024-11-17 11:11:00.938714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:36.383 [2024-11-17 11:11:00.938750] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.383 [2024-11-17 11:11:00.938762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:36.383 [2024-11-17 11:11:00.938772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.383 [2024-11-17 11:11:00.940341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.383 [2024-11-17 11:11:00.940448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.383 [2024-11-17 11:11:00.940604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.383 [2024-11-17 11:11:00.940608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:36.641 "tick_rate": 2700000000, 00:16:36.641 "poll_groups": [ 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_000", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 
"current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_001", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_002", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_003", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [] 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 }' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 [2024-11-17 11:11:01.194594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:36.641 "tick_rate": 2700000000, 00:16:36.641 "poll_groups": [ 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_000", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [ 00:16:36.641 { 00:16:36.641 "trtype": "TCP" 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_001", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [ 00:16:36.641 { 00:16:36.641 "trtype": "TCP" 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_002", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 
"current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [ 00:16:36.641 { 00:16:36.641 "trtype": "TCP" 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 }, 00:16:36.641 { 00:16:36.641 "name": "nvmf_tgt_poll_group_003", 00:16:36.641 "admin_qpairs": 0, 00:16:36.641 "io_qpairs": 0, 00:16:36.641 "current_admin_qpairs": 0, 00:16:36.641 "current_io_qpairs": 0, 00:16:36.641 "pending_bdev_io": 0, 00:16:36.641 "completed_nvme_io": 0, 00:16:36.641 "transports": [ 00:16:36.641 { 00:16:36.641 "trtype": "TCP" 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 } 00:16:36.641 ] 00:16:36.641 }' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:36.641 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 Malloc1 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 [2024-11-17 11:11:01.358896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.903 
11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:36.903 [2024-11-17 11:11:01.381605] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:36.903 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:36.903 could not add new controller: failed to write to nvme-fabrics device 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.903 11:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.903 11:11:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.470 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.470 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.471 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.471 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:37.471 11:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:39.387 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:39.387 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:39.387 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.646 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.646 [2024-11-17 11:11:04.191444] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:39.646 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:39.646 could not add new controller: failed to write to nvme-fabrics device 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.646 11:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.646 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.216 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.216 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.216 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.216 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.216 11:11:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.753 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 [2024-11-17 11:11:06.977256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 11:11:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.318 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.318 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.318 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.318 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.318 11:11:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 11:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 [2024-11-17 11:11:09.814738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.228 11:11:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.167 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.167 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.167 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.167 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:46.167 11:11:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 [2024-11-17 11:11:12.650298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.079 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:48.080 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.080 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.080 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.080 11:11:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.020 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.020 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.020 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:49.020 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.020 11:11:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 [2024-11-17 11:11:15.497343] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.926 11:11:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.495 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.495 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.495 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.495 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.495 11:11:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 [2024-11-17 11:11:18.274864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.038 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.604 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.605 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.605 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.605 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:54.605 11:11:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:56.511 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:56.511 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:56.511 11:11:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.511 [2024-11-17 11:11:21.151912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.511 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 [2024-11-17 11:11:21.199962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.771 
11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 [2024-11-17 11:11:21.248116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.771 
11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 [2024-11-17 11:11:21.296283] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.771 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 [2024-11-17 
11:11:21.344454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 
11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:56.772 "tick_rate": 2700000000, 00:16:56.772 "poll_groups": [ 00:16:56.772 { 00:16:56.772 "name": "nvmf_tgt_poll_group_000", 00:16:56.772 "admin_qpairs": 2, 00:16:56.772 "io_qpairs": 84, 00:16:56.772 "current_admin_qpairs": 0, 00:16:56.772 "current_io_qpairs": 0, 00:16:56.772 "pending_bdev_io": 0, 00:16:56.772 "completed_nvme_io": 184, 00:16:56.772 "transports": [ 00:16:56.772 { 00:16:56.772 "trtype": "TCP" 00:16:56.772 } 00:16:56.772 ] 00:16:56.772 }, 00:16:56.772 { 00:16:56.772 "name": "nvmf_tgt_poll_group_001", 00:16:56.772 "admin_qpairs": 2, 00:16:56.772 "io_qpairs": 84, 00:16:56.772 "current_admin_qpairs": 0, 00:16:56.772 "current_io_qpairs": 0, 00:16:56.772 "pending_bdev_io": 0, 00:16:56.772 "completed_nvme_io": 184, 00:16:56.772 "transports": [ 00:16:56.772 { 00:16:56.772 "trtype": "TCP" 00:16:56.772 } 00:16:56.772 ] 00:16:56.772 }, 00:16:56.772 { 00:16:56.772 "name": "nvmf_tgt_poll_group_002", 00:16:56.772 "admin_qpairs": 1, 00:16:56.772 "io_qpairs": 84, 00:16:56.772 "current_admin_qpairs": 0, 00:16:56.772 "current_io_qpairs": 0, 00:16:56.772 "pending_bdev_io": 0, 00:16:56.772 "completed_nvme_io": 135, 00:16:56.772 "transports": [ 00:16:56.772 { 00:16:56.772 "trtype": "TCP" 00:16:56.772 } 00:16:56.772 ] 00:16:56.772 }, 00:16:56.772 { 00:16:56.772 "name": "nvmf_tgt_poll_group_003", 00:16:56.772 "admin_qpairs": 2, 00:16:56.772 "io_qpairs": 84, 
00:16:56.772 "current_admin_qpairs": 0, 00:16:56.772 "current_io_qpairs": 0, 00:16:56.772 "pending_bdev_io": 0, 00:16:56.772 "completed_nvme_io": 183, 00:16:56.772 "transports": [ 00:16:56.772 { 00:16:56.772 "trtype": "TCP" 00:16:56.772 } 00:16:56.772 ] 00:16:56.772 } 00:16:56.772 ] 00:16:56.772 }' 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:56.772 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
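The trace above shows rpc.sh's `jsum` helper summing a numeric field across all poll groups of the `nvmf_get_stats` reply with `jq` piped into `awk` (yielding 7 admin qpairs and 336 I/O qpairs here). A minimal standalone sketch of that pattern, assuming `jq` is installed and using a trimmed stand-in for the real RPC JSON:

```shell
#!/bin/sh
# Stand-in for the nvmf_get_stats reply (only the fields jsum reads).
stats='{"poll_groups":[
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84},
  {"admin_qpairs":1,"io_qpairs":84},
  {"admin_qpairs":2,"io_qpairs":84}]}'

# jsum FILTER: print one number per poll group via jq, then total them in awk.
jsum() {
    echo "$stats" | jq "$1" | awk '{s+=$1} END {print s}'
}

admin_total=$(jsum '.poll_groups[].admin_qpairs')
io_total=$(jsum '.poll_groups[].io_qpairs')
echo "$admin_total $io_total"
```

With the sample data this mirrors the checks in the log, `(( 7 > 0 ))` and `(( 336 > 0 ))`; the real script feeds `jsum` the live `rpc_cmd nvmf_get_stats` output instead of a literal.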
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.033 rmmod nvme_tcp 00:16:57.033 rmmod nvme_fabrics 00:16:57.033 rmmod nvme_keyring 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 208392 ']' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 208392 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 208392 ']' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 208392 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 208392 00:16:57.033 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.034 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.034 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 208392' 00:16:57.034 killing process with pid 208392 00:16:57.034 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 208392 00:16:57.034 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 208392 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.295 11:11:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.208 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:59.208 00:16:59.208 real 0m25.524s 00:16:59.208 user 1m22.881s 00:16:59.208 sys 0m4.235s 00:16:59.208 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.208 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.208 ************************************ 00:16:59.208 END TEST nvmf_rpc 00:16:59.208 
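The teardown above runs autotest_common.sh's `killprocess` on the nvmf target pid (208392), which first probes whether the process is still alive before signalling it. A simplified sketch of that guard-then-kill idiom (the function body here is an illustration, not the script's exact implementation):

```shell
#!/bin/sh
# killprocess PID: signal the process only if it still exists.
# kill -0 delivers no signal; it merely checks that PID is valid
# and that we are allowed to signal it.
killprocess() {
    pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" && echo "killed $pid"
    else
        echo "no such process: $pid"
    fi
}

# 99999999 exceeds the default Linux pid_max, so this takes the
# "not running" branch on a typical system.
result=$(killprocess 99999999)
echo "$result"
```

The real helper also waits for the pid to exit afterwards (the `wait 208392` seen in the trace), so the cleanup does not race with the dying process.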
************************************ 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:59.468 ************************************ 00:16:59.468 START TEST nvmf_invalid 00:16:59.468 ************************************ 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:59.468 * Looking for test storage... 00:16:59.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:59.468 11:11:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:59.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.468 --rc genhtml_branch_coverage=1 00:16:59.468 --rc genhtml_function_coverage=1 00:16:59.468 --rc genhtml_legend=1 00:16:59.468 --rc geninfo_all_blocks=1 00:16:59.468 --rc geninfo_unexecuted_blocks=1 00:16:59.468 00:16:59.468 ' 
00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:59.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.468 --rc genhtml_branch_coverage=1 00:16:59.468 --rc genhtml_function_coverage=1 00:16:59.468 --rc genhtml_legend=1 00:16:59.468 --rc geninfo_all_blocks=1 00:16:59.468 --rc geninfo_unexecuted_blocks=1 00:16:59.468 00:16:59.468 ' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:59.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.468 --rc genhtml_branch_coverage=1 00:16:59.468 --rc genhtml_function_coverage=1 00:16:59.468 --rc genhtml_legend=1 00:16:59.468 --rc geninfo_all_blocks=1 00:16:59.468 --rc geninfo_unexecuted_blocks=1 00:16:59.468 00:16:59.468 ' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:59.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.468 --rc genhtml_branch_coverage=1 00:16:59.468 --rc genhtml_function_coverage=1 00:16:59.468 --rc genhtml_legend=1 00:16:59.468 --rc geninfo_all_blocks=1 00:16:59.468 --rc geninfo_unexecuted_blocks=1 00:16:59.468 00:16:59.468 ' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.468 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.468 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.469 
11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.469 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.469 11:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:59.469 11:11:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.007 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.007 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.008 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.008 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.008 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.008 11:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:17:02.008 00:17:02.008 --- 10.0.0.2 ping statistics --- 00:17:02.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.008 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:02.008 00:17:02.008 --- 10.0.0.1 ping statistics --- 00:17:02.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.008 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.008 11:11:26 
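The `ipts` call above (common.sh@287 expanding at @790) shows a useful pattern: every iptables rule the test installs is tagged with a `-m comment` whose text embeds the original arguments, so teardown can later `iptables-save | grep SPDK_NVMF` and delete exactly the rules this run added. A minimal sketch of that wrapper; the `IPTABLES` override defaulting to `echo` is illustrative so the demo needs no root, while the real helper invokes `iptables` directly:

```shell
# Sketch of the ipts pattern: wrap iptables so each installed rule
# carries a comment recording the arguments that created it. The
# IPTABLES override (defaulting to "echo iptables") is illustrative.
IPTABLES=${IPTABLES:-echo iptables}

ipts() {
    # Appending -m comment --comment "SPDK_NVMF:<args>" lets cleanup
    # code find and remove only the rules this test suite installed.
    $IPTABLES "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

With the echo default, `ipts -I INPUT 1 -p tcp --dport 4420 -j ACCEPT` prints the fully tagged command, mirroring the expansion visible in the trace.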
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=212887 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 212887 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 212887 ']' 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.008 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.009 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.009 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.009 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.009 [2024-11-17 11:11:26.465724] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:02.009 [2024-11-17 11:11:26.465805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.009 [2024-11-17 11:11:26.541760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.009 [2024-11-17 11:11:26.588819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.009 [2024-11-17 11:11:26.588903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:02.009 [2024-11-17 11:11:26.588931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.009 [2024-11-17 11:11:26.588943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.009 [2024-11-17 11:11:26.588953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
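The `waitforlisten 212887` call traced above blocks until the freshly started `nvmf_tgt` creates its JSON-RPC UNIX socket. A hedged sketch of that polling loop: `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100` mirror the locals visible in the trace, while the 0.1 s sleep and the early-exit on a dead PID are assumptions about the implementation.

```shell
# Sketch of waitforlisten: poll until the target process has created
# its JSON-RPC UNIX socket, or give up. rpc_addr and max_retries match
# the trace; the 0.1s sleep interval is an assumption.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    [ -n "$pid" ] || return 1
    for ((i = 0; i < max_retries; i++)); do
        # Bail out early if the target died before ever listening.
        kill -0 "$pid" 2>/dev/null || return 1
        [ -S "$rpc_addr" ] && return 0   # socket exists: target is up
        sleep 0.1
    done
    return 1
}
```

The dead-PID check matters in CI: without it, a target that crashes during startup would stall the job for the full retry budget instead of failing fast.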
00:17:02.009 [2024-11-17 11:11:26.590446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.009 [2024-11-17 11:11:26.590567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.009 [2024-11-17 11:11:26.590645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.009 [2024-11-17 11:11:26.590650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:02.267 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6610 00:17:02.525 [2024-11-17 11:11:26.978815] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:02.525 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:02.525 { 00:17:02.525 "nqn": "nqn.2016-06.io.spdk:cnode6610", 00:17:02.525 "tgt_name": "foobar", 00:17:02.525 "method": "nvmf_create_subsystem", 00:17:02.525 "req_id": 1 00:17:02.525 } 00:17:02.525 Got JSON-RPC error 
response 00:17:02.525 response: 00:17:02.525 { 00:17:02.525 "code": -32603, 00:17:02.525 "message": "Unable to find target foobar" 00:17:02.525 }' 00:17:02.525 11:11:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:02.525 { 00:17:02.525 "nqn": "nqn.2016-06.io.spdk:cnode6610", 00:17:02.525 "tgt_name": "foobar", 00:17:02.525 "method": "nvmf_create_subsystem", 00:17:02.525 "req_id": 1 00:17:02.525 } 00:17:02.525 Got JSON-RPC error response 00:17:02.525 response: 00:17:02.525 { 00:17:02.525 "code": -32603, 00:17:02.525 "message": "Unable to find target foobar" 00:17:02.525 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:02.525 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:02.525 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29434 00:17:02.784 [2024-11-17 11:11:27.287874] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29434: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:02.784 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:02.784 { 00:17:02.784 "nqn": "nqn.2016-06.io.spdk:cnode29434", 00:17:02.784 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:02.784 "method": "nvmf_create_subsystem", 00:17:02.784 "req_id": 1 00:17:02.784 } 00:17:02.784 Got JSON-RPC error response 00:17:02.784 response: 00:17:02.784 { 00:17:02.784 "code": -32602, 00:17:02.784 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:02.784 }' 00:17:02.784 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:02.784 { 00:17:02.784 "nqn": "nqn.2016-06.io.spdk:cnode29434", 00:17:02.784 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:02.784 "method": "nvmf_create_subsystem", 00:17:02.784 
"req_id": 1 00:17:02.784 } 00:17:02.784 Got JSON-RPC error response 00:17:02.784 response: 00:17:02.784 { 00:17:02.784 "code": -32602, 00:17:02.784 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:02.784 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:02.784 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:02.784 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16029 00:17:03.043 [2024-11-17 11:11:27.560776] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16029: invalid model number 'SPDK_Controller' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:03.043 { 00:17:03.043 "nqn": "nqn.2016-06.io.spdk:cnode16029", 00:17:03.043 "model_number": "SPDK_Controller\u001f", 00:17:03.043 "method": "nvmf_create_subsystem", 00:17:03.043 "req_id": 1 00:17:03.043 } 00:17:03.043 Got JSON-RPC error response 00:17:03.043 response: 00:17:03.043 { 00:17:03.043 "code": -32602, 00:17:03.043 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.043 }' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:03.043 { 00:17:03.043 "nqn": "nqn.2016-06.io.spdk:cnode16029", 00:17:03.043 "model_number": "SPDK_Controller\u001f", 00:17:03.043 "method": "nvmf_create_subsystem", 00:17:03.043 "req_id": 1 00:17:03.043 } 00:17:03.043 Got JSON-RPC error response 00:17:03.043 response: 00:17:03.043 { 00:17:03.043 "code": -32602, 00:17:03.043 "message": "Invalid MN SPDK_Controller\u001f" 00:17:03.043 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
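The invalid-target, invalid-SN, and invalid-MN checks above all follow one pattern: capture the RPC's combined request/error output into `$out`, then match it with a bash `[[ ... == *pattern* ]]` glob (xtrace renders the quoted pattern with every character backslash-escaped, which is why the log shows `*\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t*`). A self-contained sketch of that validation step; `expect_rpc_error` is an illustrative name, and the here-doc sample reproduces the JSON-RPC error body from the log:

```shell
# Sketch of the invalid.sh validation pattern: capture the JSON-RPC
# error text and assert the expected message substring appears.
# expect_rpc_error is an illustrative helper name.
expect_rpc_error() {
    local out=$1 want=$2
    # Quoting $want inside the glob makes it a literal substring match;
    # xtrace shows the same test with each character backslash-escaped.
    [[ $out == *"$want"* ]]
}

# Sample output copied from the nvmf_create_subsystem failure above.
out=$(cat <<'EOF'
request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode6610",
  "tgt_name": "foobar",
  "method": "nvmf_create_subsystem",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32603,
  "message": "Unable to find target foobar"
}
EOF
)

expect_rpc_error "$out" "Unable to find target"
```

The substring match is deliberately loose: it pins the error class ("Unable to find target", "Invalid SN", "Invalid MN") without coupling the test to incidental formatting of the JSON body.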
00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.043 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:03.043 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:03.044 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:03.044 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:03.044 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.044 11:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''1]'\''F(#*d7@"mWxE~9@' 00:17:03.044 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''1]'\''F(#*d7@"mWxE~9@' nqn.2016-06.io.spdk:cnode31623 00:17:03.303 [2024-11-17 11:11:27.913965] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31623: invalid serial number ''1]'F(#*d7@"mWxE~9@' 00:17:03.303 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:03.303 { 00:17:03.303 "nqn": "nqn.2016-06.io.spdk:cnode31623", 00:17:03.303 "serial_number": "'\''\u007f1]'\''F(\u007f#*d7@\"mWxE~9@", 00:17:03.303 "method": "nvmf_create_subsystem", 00:17:03.303 "req_id": 1 00:17:03.303 } 00:17:03.303 Got JSON-RPC error response 00:17:03.303 response: 00:17:03.303 { 00:17:03.303 "code": -32602, 00:17:03.303 "message": "Invalid SN '\''\u007f1]'\''F(\u007f#*d7@\"mWxE~9@" 00:17:03.303 }' 00:17:03.303 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:03.303 { 00:17:03.303 "nqn": "nqn.2016-06.io.spdk:cnode31623", 00:17:03.303 "serial_number": "'\u007f1]'F(\u007f#*d7@\"mWxE~9@", 00:17:03.303 "method": "nvmf_create_subsystem", 00:17:03.303 "req_id": 1 00:17:03.303 } 00:17:03.303 Got JSON-RPC error response 00:17:03.303 response: 00:17:03.303 { 00:17:03.303 "code": -32602, 00:17:03.304 "message": "Invalid SN '\u007f1]'F(\u007f#*d7@\"mWxE~9@" 00:17:03.304 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
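The long character-by-character trace above is `gen_random_s` building a 21-character serial number from byte codes 32 through 127 (printable ASCII plus DEL), one `printf %x` / `echo -e` round per character. A compact sketch of the same generator, assuming the code range and per-character mechanism visible in the trace; the exact RANDOM seeding and helper structure of target/invalid.sh may differ:

```shell
# Sketch of gen_random_s from target/invalid.sh: emit a string of $1
# characters drawn from byte codes 32..127 (the chars array in the
# trace), built one printf-%x / echo -e round at a time.
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))          # same code range as the trace
    for ((ll = 0; ll < length; ll++)); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        # Convert the numeric code to its character via an \xNN escape.
        string+=$(echo -en "$(printf '\\x%x' "$code")")
    done
    echo "$string"                        # plain echo: no re-escaping
}
```

Feeding such strings (which include quotes, backslashes, and DEL) to `nvmf_create_subsystem -s` is exactly what exercises the target's serial-number validation in the following RPC calls.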
target/invalid.sh@19 -- # local length=41 ll 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:03.304 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 
00:17:03.563 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 
00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:03.564 
11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 
00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 
00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.564 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 
11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.565 11:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NudOOJ'\''nWG"uLFQ\co\U?p\~X{)/lS>7yeck_VAi$' 00:17:03.565 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'NudOOJ'\''nWG"uLFQ\co\U?p\~X{)/lS>7yeck_VAi$' nqn.2016-06.io.spdk:cnode11047 00:17:03.824 [2024-11-17 11:11:28.319294] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11047: invalid model number 'NudOOJ'nWG"uLFQ\co\U?p\~X{)/lS>7yeck_VAi$' 00:17:03.824 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:03.824 { 00:17:03.824 "nqn": "nqn.2016-06.io.spdk:cnode11047", 00:17:03.824 "model_number": "NudOOJ'\''nWG\"uLFQ\\co\\U?p\\~X{)/lS>7yeck_VAi$", 00:17:03.824 "method": "nvmf_create_subsystem", 00:17:03.824 "req_id": 1 00:17:03.824 } 00:17:03.824 Got JSON-RPC error response 00:17:03.824 response: 00:17:03.824 { 00:17:03.824 "code": -32602, 00:17:03.824 "message": "Invalid MN NudOOJ'\''nWG\"uLFQ\\co\\U?p\\~X{)/lS>7yeck_VAi$" 00:17:03.824 }' 00:17:03.824 11:11:28 
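The long `(( ll++ ))` loop traced above builds a random model-number string one character at a time: `printf %x` turns a code point into hex, `echo -e` renders the `\xNN` escape, and `string+=` appends the result, yielding strings like `NudOOJ'nWG"uLFQ\co\U?p\~X{)/lS>7yeck_VAi$` that are then fed to `rpc.py nvmf_create_subsystem -d` to provoke the "invalid model number" error. A minimal bash sketch of that append technique (`gen_random_string` is a hypothetical name; the real logic lives in `target/invalid.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the character-append loop seen in the trace: pick a printable,
# non-space ASCII code point, render it via its hex escape, and append it.
gen_random_string() {
  local length=$1 string='' ll code
  for (( ll = 0; ll < length; ll++ )); do
    code=$(( (RANDOM % 94) + 33 ))                 # printable ASCII 33..126
    string+=$(printf "\\x$(printf '%x' "$code")")  # e.g. 0x27 -> '
  done
  printf '%s\n' "$string"
}
```

Because every appended character is printable and non-space, the resulting string survives shell quoting but still exercises characters (`\`, `"`, `'`, `$`, `~`) that a model-number validator is likely to reject.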
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:03.824 { 00:17:03.824 "nqn": "nqn.2016-06.io.spdk:cnode11047", 00:17:03.824 "model_number": "NudOOJ'nWG\"uLFQ\\co\\U?p\\~X{)/lS>7yeck_VAi$", 00:17:03.824 "method": "nvmf_create_subsystem", 00:17:03.824 "req_id": 1 00:17:03.824 } 00:17:03.824 Got JSON-RPC error response 00:17:03.824 response: 00:17:03.824 { 00:17:03.824 "code": -32602, 00:17:03.824 "message": "Invalid MN NudOOJ'nWG\"uLFQ\\co\\U?p\\~X{)/lS>7yeck_VAi$" 00:17:03.824 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.824 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:04.083 [2024-11-17 11:11:28.592281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.083 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:04.341 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:04.341 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:04.341 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:04.341 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:04.342 11:11:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:04.600 [2024-11-17 11:11:29.150121] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:04.600 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:04.600 { 00:17:04.600 "nqn": "nqn.2016-06.io.spdk:cnode", 
00:17:04.600 "listen_address": { 00:17:04.600 "trtype": "tcp", 00:17:04.600 "traddr": "", 00:17:04.600 "trsvcid": "4421" 00:17:04.600 }, 00:17:04.600 "method": "nvmf_subsystem_remove_listener", 00:17:04.600 "req_id": 1 00:17:04.600 } 00:17:04.600 Got JSON-RPC error response 00:17:04.600 response: 00:17:04.600 { 00:17:04.600 "code": -32602, 00:17:04.600 "message": "Invalid parameters" 00:17:04.600 }' 00:17:04.600 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:04.600 { 00:17:04.600 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.600 "listen_address": { 00:17:04.600 "trtype": "tcp", 00:17:04.600 "traddr": "", 00:17:04.600 "trsvcid": "4421" 00:17:04.600 }, 00:17:04.600 "method": "nvmf_subsystem_remove_listener", 00:17:04.600 "req_id": 1 00:17:04.600 } 00:17:04.600 Got JSON-RPC error response 00:17:04.600 response: 00:17:04.600 { 00:17:04.600 "code": -32602, 00:17:04.600 "message": "Invalid parameters" 00:17:04.600 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:04.600 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31859 -i 0 00:17:04.857 [2024-11-17 11:11:29.414990] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31859: invalid cntlid range [0-65519] 00:17:04.857 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:04.857 { 00:17:04.857 "nqn": "nqn.2016-06.io.spdk:cnode31859", 00:17:04.857 "min_cntlid": 0, 00:17:04.857 "method": "nvmf_create_subsystem", 00:17:04.857 "req_id": 1 00:17:04.857 } 00:17:04.857 Got JSON-RPC error response 00:17:04.857 response: 00:17:04.857 { 00:17:04.857 "code": -32602, 00:17:04.857 "message": "Invalid cntlid range [0-65519]" 00:17:04.857 }' 00:17:04.857 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:04.857 
{ 00:17:04.857 "nqn": "nqn.2016-06.io.spdk:cnode31859", 00:17:04.857 "min_cntlid": 0, 00:17:04.857 "method": "nvmf_create_subsystem", 00:17:04.857 "req_id": 1 00:17:04.857 } 00:17:04.857 Got JSON-RPC error response 00:17:04.857 response: 00:17:04.857 { 00:17:04.857 "code": -32602, 00:17:04.857 "message": "Invalid cntlid range [0-65519]" 00:17:04.857 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.857 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode835 -i 65520 00:17:05.115 [2024-11-17 11:11:29.687955] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode835: invalid cntlid range [65520-65519] 00:17:05.115 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:05.115 { 00:17:05.115 "nqn": "nqn.2016-06.io.spdk:cnode835", 00:17:05.115 "min_cntlid": 65520, 00:17:05.115 "method": "nvmf_create_subsystem", 00:17:05.115 "req_id": 1 00:17:05.115 } 00:17:05.115 Got JSON-RPC error response 00:17:05.115 response: 00:17:05.115 { 00:17:05.115 "code": -32602, 00:17:05.115 "message": "Invalid cntlid range [65520-65519]" 00:17:05.115 }' 00:17:05.115 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:05.115 { 00:17:05.115 "nqn": "nqn.2016-06.io.spdk:cnode835", 00:17:05.115 "min_cntlid": 65520, 00:17:05.115 "method": "nvmf_create_subsystem", 00:17:05.115 "req_id": 1 00:17:05.115 } 00:17:05.115 Got JSON-RPC error response 00:17:05.115 response: 00:17:05.115 { 00:17:05.115 "code": -32602, 00:17:05.115 "message": "Invalid cntlid range [65520-65519]" 00:17:05.115 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.115 11:11:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode464 
-I 0 00:17:05.373 [2024-11-17 11:11:29.980893] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode464: invalid cntlid range [1-0] 00:17:05.373 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:05.373 { 00:17:05.373 "nqn": "nqn.2016-06.io.spdk:cnode464", 00:17:05.373 "max_cntlid": 0, 00:17:05.373 "method": "nvmf_create_subsystem", 00:17:05.373 "req_id": 1 00:17:05.373 } 00:17:05.373 Got JSON-RPC error response 00:17:05.373 response: 00:17:05.373 { 00:17:05.373 "code": -32602, 00:17:05.373 "message": "Invalid cntlid range [1-0]" 00:17:05.373 }' 00:17:05.373 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:05.373 { 00:17:05.373 "nqn": "nqn.2016-06.io.spdk:cnode464", 00:17:05.373 "max_cntlid": 0, 00:17:05.373 "method": "nvmf_create_subsystem", 00:17:05.373 "req_id": 1 00:17:05.373 } 00:17:05.373 Got JSON-RPC error response 00:17:05.373 response: 00:17:05.373 { 00:17:05.373 "code": -32602, 00:17:05.373 "message": "Invalid cntlid range [1-0]" 00:17:05.373 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.373 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19782 -I 65520 00:17:05.631 [2024-11-17 11:11:30.269922] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19782: invalid cntlid range [1-65520] 00:17:05.890 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:05.890 { 00:17:05.890 "nqn": "nqn.2016-06.io.spdk:cnode19782", 00:17:05.890 "max_cntlid": 65520, 00:17:05.890 "method": "nvmf_create_subsystem", 00:17:05.890 "req_id": 1 00:17:05.890 } 00:17:05.890 Got JSON-RPC error response 00:17:05.890 response: 00:17:05.890 { 00:17:05.890 "code": -32602, 00:17:05.890 "message": "Invalid cntlid range [1-65520]" 
00:17:05.890 }' 00:17:05.890 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:05.890 { 00:17:05.890 "nqn": "nqn.2016-06.io.spdk:cnode19782", 00:17:05.890 "max_cntlid": 65520, 00:17:05.890 "method": "nvmf_create_subsystem", 00:17:05.890 "req_id": 1 00:17:05.890 } 00:17:05.890 Got JSON-RPC error response 00:17:05.890 response: 00:17:05.890 { 00:17:05.890 "code": -32602, 00:17:05.890 "message": "Invalid cntlid range [1-65520]" 00:17:05.890 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.890 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3122 -i 6 -I 5 00:17:06.150 [2024-11-17 11:11:30.546839] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3122: invalid cntlid range [6-5] 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:06.150 { 00:17:06.150 "nqn": "nqn.2016-06.io.spdk:cnode3122", 00:17:06.150 "min_cntlid": 6, 00:17:06.150 "max_cntlid": 5, 00:17:06.150 "method": "nvmf_create_subsystem", 00:17:06.150 "req_id": 1 00:17:06.150 } 00:17:06.150 Got JSON-RPC error response 00:17:06.150 response: 00:17:06.150 { 00:17:06.150 "code": -32602, 00:17:06.150 "message": "Invalid cntlid range [6-5]" 00:17:06.150 }' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:06.150 { 00:17:06.150 "nqn": "nqn.2016-06.io.spdk:cnode3122", 00:17:06.150 "min_cntlid": 6, 00:17:06.150 "max_cntlid": 5, 00:17:06.150 "method": "nvmf_create_subsystem", 00:17:06.150 "req_id": 1 00:17:06.150 } 00:17:06.150 Got JSON-RPC error response 00:17:06.150 response: 00:17:06.150 { 00:17:06.150 "code": -32602, 00:17:06.150 "message": "Invalid cntlid range [6-5]" 00:17:06.150 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:06.150 11:11:30 
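The five `nvmf_create_subsystem` calls above probe the controller-ID bounds from every direction: `-i 0`, `-i 65520`, `-I 0`, `-I 65520`, and `-i 6 -I 5` all come back as JSON-RPC error `-32602` with `Invalid cntlid range [...]`. A hypothetical shell mirror of the check those errors imply (valid cntlids are 1..65519 and min must not exceed max; this is an illustration, not SPDK source):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the target-side cntlid validation,
# reconstructed from the error messages in the log above.
check_cntlid_range() {
  local min=$1 max=$2
  if (( min < 1 || min > 65519 || max < 1 || max > 65519 || min > max )); then
    echo "Invalid cntlid range [$min-$max]"
    return 1
  fi
  echo "ok"
}
```

Note how the test only sets one side of the range per call: the error `[1-0]` for `-I 0` shows the unset minimum defaults to 1, and `[65520-65519]` shows the unset maximum defaults to 65519.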
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:06.150 { 00:17:06.150 "name": "foobar", 00:17:06.150 "method": "nvmf_delete_target", 00:17:06.150 "req_id": 1 00:17:06.150 } 00:17:06.150 Got JSON-RPC error response 00:17:06.150 response: 00:17:06.150 { 00:17:06.150 "code": -32602, 00:17:06.150 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:06.150 }' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:06.150 { 00:17:06.150 "name": "foobar", 00:17:06.150 "method": "nvmf_delete_target", 00:17:06.150 "req_id": 1 00:17:06.150 } 00:17:06.150 Got JSON-RPC error response 00:17:06.150 response: 00:17:06.150 { 00:17:06.150 "code": -32602, 00:17:06.150 "message": "The specified target doesn't exist, cannot delete it." 
00:17:06.150 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.150 rmmod nvme_tcp 00:17:06.150 rmmod nvme_fabrics 00:17:06.150 rmmod nvme_keyring 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 212887 ']' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 212887 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 212887 ']' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 212887 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 212887 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 212887' 00:17:06.150 killing process with pid 212887 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 212887 00:17:06.150 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 212887 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.410 11:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.410 11:11:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.954 00:17:08.954 real 0m9.145s 00:17:08.954 user 0m21.746s 00:17:08.954 sys 0m2.501s 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:08.954 ************************************ 00:17:08.954 END TEST nvmf_invalid 00:17:08.954 ************************************ 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.954 ************************************ 00:17:08.954 START TEST nvmf_connect_stress 00:17:08.954 ************************************ 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.954 * Looking for test storage... 
00:17:08.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:08.954 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.954 11:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:08.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.954 --rc genhtml_branch_coverage=1 00:17:08.954 --rc genhtml_function_coverage=1 00:17:08.954 --rc genhtml_legend=1 00:17:08.954 --rc geninfo_all_blocks=1 00:17:08.954 --rc geninfo_unexecuted_blocks=1 00:17:08.954 00:17:08.954 ' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:08.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.954 --rc genhtml_branch_coverage=1 00:17:08.954 --rc genhtml_function_coverage=1 00:17:08.954 --rc genhtml_legend=1 00:17:08.954 --rc geninfo_all_blocks=1 00:17:08.954 --rc geninfo_unexecuted_blocks=1 00:17:08.954 00:17:08.954 ' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:08.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.954 --rc genhtml_branch_coverage=1 00:17:08.954 --rc genhtml_function_coverage=1 00:17:08.954 --rc genhtml_legend=1 00:17:08.954 --rc geninfo_all_blocks=1 00:17:08.954 --rc geninfo_unexecuted_blocks=1 00:17:08.954 00:17:08.954 ' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:08.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.954 --rc genhtml_branch_coverage=1 00:17:08.954 --rc genhtml_function_coverage=1 00:17:08.954 --rc genhtml_legend=1 00:17:08.954 --rc geninfo_all_blocks=1 00:17:08.954 --rc geninfo_unexecuted_blocks=1 00:17:08.954 00:17:08.954 ' 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.954 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.955 11:11:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.910 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:10.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.910 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:10.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.910 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:10.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.910 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:10.911 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:10.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:17:10.911 00:17:10.911 --- 10.0.0.2 ping statistics --- 00:17:10.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.911 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:17:10.911 00:17:10.911 --- 10.0.0.1 ping statistics --- 00:17:10.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.911 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:10.911 11:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=215560 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 215560 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 215560 ']' 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.911 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.172 [2024-11-17 11:11:35.609475] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:11.172 [2024-11-17 11:11:35.609597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.172 [2024-11-17 11:11:35.678806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:11.172 [2024-11-17 11:11:35.721274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.172 [2024-11-17 11:11:35.721334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.172 [2024-11-17 11:11:35.721363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.172 [2024-11-17 11:11:35.721374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.172 [2024-11-17 11:11:35.721383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:11.172 [2024-11-17 11:11:35.722829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.172 [2024-11-17 11:11:35.722875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.172 [2024-11-17 11:11:35.722878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.431 [2024-11-17 11:11:35.866093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.431 [2024-11-17 11:11:35.883455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.431 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.432 NULL1 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=215667 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.432 11:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.692 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.692 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:11.692 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.692 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.692 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.953 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.953 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:11.953 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.953 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.953 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.525 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.525 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:12.525 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.525 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.525 11:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.784 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.784 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:12.784 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.784 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.784 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.042 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.042 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:13.042 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.042 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.042 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.302 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.302 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:13.302 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.302 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.302 11:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.563 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.563 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:13.563 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.563 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.563 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.132 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.133 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:14.133 11:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.133 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.133 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.391 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.391 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:14.391 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.391 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.391 11:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.653 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:14.653 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.653 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.653 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.929 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.929 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:14.929 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.929 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.929 11:11:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.188 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.188 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:15.188 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.188 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.188 11:11:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.756 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.756 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:15.756 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.756 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.756 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.014 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.014 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:16.014 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.014 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.014 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.275 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.275 11:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:16.275 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.275 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.275 11:11:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.534 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.534 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:16.534 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.534 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.534 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.794 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.794 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:16.794 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.794 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.794 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.364 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.364 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:17.364 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.364 11:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.364 11:11:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.623 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.623 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:17.623 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.623 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.623 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.884 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.884 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:17.884 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.884 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.884 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.145 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.145 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:18.145 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.145 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.145 11:11:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.405 11:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.406 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:18.406 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.406 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.406 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.973 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.973 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:18.973 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.973 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.973 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.231 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.231 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:19.231 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.231 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.231 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.491 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.491 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:19.491 
11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.491 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.491 11:11:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.751 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.751 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:19.752 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.752 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.752 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.011 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.011 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:20.011 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.011 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.011 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.580 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:20.580 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.580 11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.580 
11:11:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.839 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.839 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:20.839 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.839 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.839 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.098 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.098 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:21.098 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.098 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.098 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.358 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.358 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:21.358 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.358 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.358 11:11:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.617 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:21.617 11:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 215667 00:17:21.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (215667) - No such process 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 215667 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:21.617 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:21.617 rmmod nvme_tcp 00:17:21.617 rmmod nvme_fabrics 00:17:21.617 rmmod nvme_keyring 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 215560 ']' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 215560 ']' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 215560' 00:17:21.876 killing process with pid 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 215560 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.876 11:11:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:24.415 00:17:24.415 real 0m15.490s 00:17:24.415 user 0m40.009s 00:17:24.415 sys 0m4.712s 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.415 ************************************ 00:17:24.415 END TEST nvmf_connect_stress 00:17:24.415 ************************************ 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:24.415 ************************************ 00:17:24.415 START TEST nvmf_fused_ordering 00:17:24.415 ************************************ 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:24.415 * Looking for test storage... 00:17:24.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.415 --rc genhtml_branch_coverage=1 00:17:24.415 --rc genhtml_function_coverage=1 00:17:24.415 --rc genhtml_legend=1 00:17:24.415 --rc geninfo_all_blocks=1 00:17:24.415 --rc geninfo_unexecuted_blocks=1 00:17:24.415 00:17:24.415 ' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.415 --rc genhtml_branch_coverage=1 00:17:24.415 --rc genhtml_function_coverage=1 00:17:24.415 --rc genhtml_legend=1 00:17:24.415 --rc geninfo_all_blocks=1 00:17:24.415 --rc geninfo_unexecuted_blocks=1 00:17:24.415 00:17:24.415 ' 00:17:24.415 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.415 --rc genhtml_branch_coverage=1 00:17:24.415 --rc genhtml_function_coverage=1 00:17:24.415 --rc genhtml_legend=1 00:17:24.416 --rc geninfo_all_blocks=1 00:17:24.416 --rc geninfo_unexecuted_blocks=1 00:17:24.416 00:17:24.416 ' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.416 --rc genhtml_branch_coverage=1 
00:17:24.416 --rc genhtml_function_coverage=1 00:17:24.416 --rc genhtml_legend=1 00:17:24.416 --rc geninfo_all_blocks=1 00:17:24.416 --rc geninfo_unexecuted_blocks=1 00:17:24.416 00:17:24.416 ' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.416 11:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:24.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:24.416 11:11:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.952 11:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:26.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.952 11:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:26.952 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:26.952 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.953 11:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:26.953 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:26.953 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:26.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:17:26.953 00:17:26.953 --- 10.0.0.2 ping statistics --- 00:17:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.953 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:17:26.953 00:17:26.953 --- 10.0.0.1 ping statistics --- 00:17:26.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.953 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:26.953 11:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=218819 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 218819 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 218819 ']' 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:26.953 [2024-11-17 11:11:51.278555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:26.953 [2024-11-17 11:11:51.278651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.953 [2024-11-17 11:11:51.354939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.953 [2024-11-17 11:11:51.403069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.953 [2024-11-17 11:11:51.403125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.953 [2024-11-17 11:11:51.403152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.953 [2024-11-17 11:11:51.403163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.953 [2024-11-17 11:11:51.403173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:26.953 [2024-11-17 11:11:51.403828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.953 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 [2024-11-17 11:11:51.547786] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 [2024-11-17 11:11:51.564020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 NULL1
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.954 11:11:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:17:26.954 [2024-11-17 11:11:51.606697] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:17:26.954 [2024-11-17 11:11:51.606734] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218966 ]
00:17:27.526 Attached to nqn.2016-06.io.spdk:cnode1
00:17:27.526 Namespace ID: 1 size: 1GB
00:17:27.526 fused_ordering(0) 00:17:27.526 fused_ordering(1) 00:17:27.526 fused_ordering(2) 00:17:27.526 fused_ordering(3) 00:17:27.526 fused_ordering(4) 00:17:27.526 fused_ordering(5) 00:17:27.526 fused_ordering(6) 00:17:27.526 fused_ordering(7) 00:17:27.526 fused_ordering(8) 00:17:27.526 fused_ordering(9) 00:17:27.526 fused_ordering(10) 00:17:27.526 fused_ordering(11) 00:17:27.526 fused_ordering(12) 00:17:27.526 fused_ordering(13) 00:17:27.526 fused_ordering(14) 00:17:27.526 fused_ordering(15) 00:17:27.526 fused_ordering(16) 00:17:27.526 fused_ordering(17) 00:17:27.526 fused_ordering(18) 00:17:27.526 fused_ordering(19) 00:17:27.526 fused_ordering(20) 00:17:27.526 fused_ordering(21) 00:17:27.526 fused_ordering(22) 00:17:27.526 fused_ordering(23) 00:17:27.526 fused_ordering(24) 00:17:27.526 fused_ordering(25) 00:17:27.526 fused_ordering(26) 00:17:27.526 fused_ordering(27) 00:17:27.526
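The xtrace records above correspond to the target-side setup performed by target/fused_ordering.sh (the `@15` through `@22` markers) before the initiator binary is launched. As a rough sketch of that sequence, the following dry-run script echoes the RPC invocations instead of issuing them: `rpc` here is a hypothetical stand-in, since the real commands go through SPDK's scripts/rpc.py against a running nvmf_tgt. Flags, NQN, address, and port are copied verbatim from the trace.

```shell
#!/bin/sh
# Dry-run sketch of the fused_ordering target setup traced above.
# rpc() only echoes what would be run; a live nvmf_tgt plus
# scripts/rpc.py would be needed to execute these for real.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                                           # fused_ordering.sh@15
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # @16
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 # @17
rpc bdev_null_create NULL1 1000 512       # @18: 1000 MB null bdev, 512-byte blocks
rpc bdev_wait_for_examine                 # @19
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                            # @20
# @22 then runs the fused_ordering initiator against tcp://10.0.0.2:4420
```

Run under sh, this prints the six rpc.py invocations in order, matching the rpc_cmd lines in the trace.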
fused_ordering(28) … fused_ordering(997)
[entries 28 through 997 of the fused_ordering counter elided; the counter advanced monotonically, with timestamps stepping from 00:17:27.526 to 00:17:29.188 in bursts of roughly 205 entries (at 00:17:28.094, 00:17:28.353, 00:17:28.925, and 00:17:29.187)]
00:17:29.188 fused_ordering(998) 00:17:29.188 fused_ordering(999) 00:17:29.188 fused_ordering(1000) 00:17:29.188 fused_ordering(1001) 00:17:29.188 fused_ordering(1002) 00:17:29.188 fused_ordering(1003) 00:17:29.188 fused_ordering(1004) 00:17:29.188 fused_ordering(1005) 00:17:29.188 fused_ordering(1006) 00:17:29.188 fused_ordering(1007) 00:17:29.188 fused_ordering(1008) 00:17:29.188 fused_ordering(1009) 00:17:29.188 fused_ordering(1010) 00:17:29.188 fused_ordering(1011) 00:17:29.188 fused_ordering(1012) 00:17:29.188 fused_ordering(1013) 00:17:29.188 fused_ordering(1014) 00:17:29.188 fused_ordering(1015) 00:17:29.188 fused_ordering(1016) 00:17:29.188 fused_ordering(1017) 00:17:29.188 fused_ordering(1018) 00:17:29.188 fused_ordering(1019) 00:17:29.188 fused_ordering(1020) 00:17:29.188 fused_ordering(1021) 00:17:29.188 fused_ordering(1022) 00:17:29.188 fused_ordering(1023) 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.449 rmmod nvme_tcp 00:17:29.449 rmmod nvme_fabrics 00:17:29.449 rmmod nvme_keyring 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 218819 ']' 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 218819 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 218819 ']' 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 218819 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 218819 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 218819' 00:17:29.449 killing process with pid 218819 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 218819 00:17:29.449 11:11:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 218819 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.710 11:11:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:31.619 00:17:31.619 real 0m7.548s 00:17:31.619 user 0m4.853s 00:17:31.619 sys 0m3.061s 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:31.619 ************************************ 00:17:31.619 END TEST nvmf_fused_ordering 00:17:31.619 ************************************ 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:31.619 11:11:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:31.619 ************************************ 00:17:31.619 START TEST nvmf_ns_masking 00:17:31.619 ************************************ 00:17:31.619 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:31.878 * Looking for test storage... 00:17:31.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:31.878 11:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:31.878 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.879 --rc genhtml_branch_coverage=1 00:17:31.879 --rc genhtml_function_coverage=1 00:17:31.879 --rc genhtml_legend=1 00:17:31.879 --rc geninfo_all_blocks=1 00:17:31.879 --rc geninfo_unexecuted_blocks=1 00:17:31.879 00:17:31.879 ' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.879 --rc genhtml_branch_coverage=1 00:17:31.879 --rc genhtml_function_coverage=1 00:17:31.879 --rc genhtml_legend=1 00:17:31.879 --rc geninfo_all_blocks=1 00:17:31.879 --rc geninfo_unexecuted_blocks=1 00:17:31.879 00:17:31.879 ' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.879 --rc genhtml_branch_coverage=1 00:17:31.879 --rc genhtml_function_coverage=1 00:17:31.879 --rc genhtml_legend=1 00:17:31.879 --rc geninfo_all_blocks=1 00:17:31.879 --rc geninfo_unexecuted_blocks=1 00:17:31.879 00:17:31.879 ' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:31.879 --rc genhtml_branch_coverage=1 00:17:31.879 --rc 
genhtml_function_coverage=1 00:17:31.879 --rc genhtml_legend=1 00:17:31.879 --rc geninfo_all_blocks=1 00:17:31.879 --rc geninfo_unexecuted_blocks=1 00:17:31.879 00:17:31.879 ' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:31.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a1451322-cdbb-4ebc-9d9c-9cdfdcb55984 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=91c36f90-e3d1-4c82-9ec6-d9e068747280 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=38517f83-7251-432a-8ea1-ef16d41c099f 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:31.879 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:31.880 11:11:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.413 11:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.413 11:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:34.413 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:34.413 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:17:34.413 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:34.413 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.413 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:17:34.414 00:17:34.414 --- 10.0.0.2 ping statistics --- 00:17:34.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.414 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
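The xtrace above shows `nvmf_tcp_init` in nvmf/common.sh building a two-port TCP test topology: the target-side interface (cvl_0_0 in this run) is moved into a dedicated network namespace while the initiator-side interface (cvl_0_1) stays in the root namespace, so both ends of the NVMe/TCP link live on one host but traverse real NICs. Condensed from the commands in the log (interface names and addresses are the ones this run detected; they will differ on other hardware):

```shell
# Isolate the target NIC in its own netns so initiator and target
# can exchange traffic even though both ports are in the same host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps 10.0.0.1 in the root namespace...
ip addr add 10.0.0.1/24 dev cvl_0_1
# ...the target gets 10.0.0.2 inside the namespace.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring up both ends, plus loopback inside the namespace.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP discovery port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before launching nvmf_tgt in the netns.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

All of these commands appear verbatim in the trace; they need root and the E810 ports this job runs on, so this is a reference sketch of the topology rather than a portable script.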
00:17:34.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:17:34.414 00:17:34.414 --- 10.0.0.1 ping statistics --- 00:17:34.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.414 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=221172 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 221172 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 221172 ']' 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.414 [2024-11-17 11:11:58.721135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:17:34.414 [2024-11-17 11:11:58.721228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.414 [2024-11-17 11:11:58.791094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.414 [2024-11-17 11:11:58.832967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.414 [2024-11-17 11:11:58.833033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:34.414 [2024-11-17 11:11:58.833047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.414 [2024-11-17 11:11:58.833057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.414 [2024-11-17 11:11:58.833067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.414 [2024-11-17 11:11:58.833700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.414 11:11:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:34.672 [2024-11-17 11:11:59.273689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.672 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:34.672 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:34.672 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:35.238 Malloc1 00:17:35.238 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:35.496 Malloc2 00:17:35.496 11:11:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:35.755 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:36.013 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.272 [2024-11-17 11:12:00.830382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.272 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:36.272 11:12:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38517f83-7251-432a-8ea1-ef16d41c099f -a 10.0.0.2 -s 4420 -i 4 00:17:36.532 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:36.532 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:36.532 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.532 11:12:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:36.532 11:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.434 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:38.435 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:38.435 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.693 [ 0]:0x1 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.693 
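The `ns_is_visible` helper driving the `[ 0]:0x1`-style checks (target/ns_masking.sh@43-45 in the trace) can be reconstructed from the xtrace: it greps `nvme list-ns` for the namespace ID, then confirms `nvme id-ns` reports a non-zero NGUID, since a masked (inactive) namespace identifies with an all-zero NGUID. A sketch assuming nvme-cli and jq as used in the log:

```shell
# Reconstructed from the xtrace of target/ns_masking.sh; $ctrl_id
# (nvme0 in this run) comes from `nvme list-subsys -o json` as shown above.
ns_is_visible() {
    local nsid=$1
    # The NSID must appear in the controller's active namespace list...
    nvme list-ns "/dev/$ctrl_id" | grep "$nsid"
    # ...and its NGUID must differ from the all-zero value that
    # id-ns returns when the namespace is masked from this host.
    nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
```

The `NOT ns_is_visible 0x1` invocations later in the trace assert the inverse: the wrapper expects this function to fail once the namespace has been masked.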
11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=959891b491734b6eb6f212a65f980ebe 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 959891b491734b6eb6f212a65f980ebe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.693 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:38.952 [ 0]:0x1 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=959891b491734b6eb6f212a65f980ebe 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 959891b491734b6eb6f212a65f980ebe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:38.952 [ 1]:0x2 00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:38.952 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.211 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:39.211 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.211 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:39.211 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:39.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.211 11:12:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:39.469 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:39.729 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:39.730 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38517f83-7251-432a-8ea1-ef16d41c099f -a 10.0.0.2 -s 4420 -i 4 00:17:39.989 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:39.989 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:39.989 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.989 11:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:39.989 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:39.989 11:12:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
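The masking scenario itself is driven entirely through rpc.py, as the trace shows: the namespace is re-added with `--no-auto-visible`, which hides it from every host until `nvmf_ns_add_host` grants visibility to a specific host NQN, and `nvmf_ns_remove_host` revokes it again. Condensed from the RPC calls in this run (paths and NQNs are the ones this job uses):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# Detach the auto-visible namespace and re-add it masked by default.
$RPC nvmf_subsystem_remove_ns $NQN 1
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 --no-auto-visible

# NSID 1 is now invisible to all hosts until explicitly granted:
$RPC nvmf_ns_add_host    $NQN 1 $HOST   # host1 can see NSID 1
$RPC nvmf_ns_remove_host $NQN 1 $HOST   # host1 loses NSID 1 again
```

Each grant/revoke is followed in the trace by a reconnect-free visibility check on the live controller, which is the point of the test: masking changes take effect on an existing connection without tearing it down.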
00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:41.898 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.157 [ 0]:0x2 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.157 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.415 [ 0]:0x1 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=959891b491734b6eb6f212a65f980ebe 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 959891b491734b6eb6f212a65f980ebe != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.415 [ 1]:0x2 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.415 11:12:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.415 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:42.415 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.415 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:42.673 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:42.673 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.674 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.932 [ 0]:0x2 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.932 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.501 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:43.501 11:12:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38517f83-7251-432a-8ea1-ef16d41c099f -a 10.0.0.2 -s 4420 -i 4 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:43.501 11:12:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.038 [ 0]:0x1 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.038 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=959891b491734b6eb6f212a65f980ebe 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 959891b491734b6eb6f212a65f980ebe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.038 [ 1]:0x2 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.038 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.297 
11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.297 [ 0]:0x2 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:46.297 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.298 11:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:46.298 11:12:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:46.557 [2024-11-17 11:12:11.174265] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:46.557 request: 00:17:46.557 { 00:17:46.557 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.557 "nsid": 2, 00:17:46.557 "host": "nqn.2016-06.io.spdk:host1", 00:17:46.557 "method": "nvmf_ns_remove_host", 00:17:46.557 "req_id": 1 00:17:46.557 } 00:17:46.557 Got JSON-RPC error response 00:17:46.557 response: 00:17:46.557 { 00:17:46.557 "code": -32602, 00:17:46.557 "message": "Invalid parameters" 00:17:46.557 } 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:46.557 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:46.817 11:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:46.817 [ 0]:0x2 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f669451dc1344f9ae15ab58fe9be45f 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f669451dc1344f9ae15ab58fe9be45f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:46.817 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=223410 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 223410 /var/tmp/host.sock 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 223410 ']' 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:47.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.076 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:47.076 [2024-11-17 11:12:11.536170] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:47.076 [2024-11-17 11:12:11.536249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223410 ] 00:17:47.076 [2024-11-17 11:12:11.602706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.076 [2024-11-17 11:12:11.648606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.335 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.335 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:47.335 11:12:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.593 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:47.852 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a1451322-cdbb-4ebc-9d9c-9cdfdcb55984 00:17:47.852 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:47.852 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A1451322CDBB4EBC9D9C9CDFDCB55984 -i 00:17:48.418 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 91c36f90-e3d1-4c82-9ec6-d9e068747280 00:17:48.418 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:48.418 11:12:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 91C36F90E3D14C829EC6D9E068747280 -i 00:17:48.676 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:48.935 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:49.193 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:49.193 11:12:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:49.453 nvme0n1 00:17:49.711 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:49.711 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:49.969 nvme1n2 00:17:50.229 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:50.229 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:50.229 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:50.229 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:50.229 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:50.489 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:50.489 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:50.489 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:50.489 11:12:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:50.748 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a1451322-cdbb-4ebc-9d9c-9cdfdcb55984 == \a\1\4\5\1\3\2\2\-\c\d\b\b\-\4\e\b\c\-\9\d\9\c\-\9\c\d\f\d\c\b\5\5\9\8\4 ]] 00:17:50.748 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:50.748 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:50.748 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:51.006 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 91c36f90-e3d1-4c82-9ec6-d9e068747280 == \9\1\c\3\6\f\9\0\-\e\3\d\1\-\4\c\8\2\-\9\e\c\6\-\d\9\e\0\6\8\7\4\7\2\8\0 ]] 00:17:51.006 11:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:51.265 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a1451322-cdbb-4ebc-9d9c-9cdfdcb55984 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A1451322CDBB4EBC9D9C9CDFDCB55984 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A1451322CDBB4EBC9D9C9CDFDCB55984 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:51.523 11:12:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A1451322CDBB4EBC9D9C9CDFDCB55984 00:17:51.782 [2024-11-17 11:12:16.236735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:51.782 [2024-11-17 11:12:16.236780] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:51.782 [2024-11-17 11:12:16.236826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:51.782 request: 00:17:51.782 { 00:17:51.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.782 "namespace": { 00:17:51.782 "bdev_name": "invalid", 00:17:51.782 "nsid": 1, 00:17:51.782 "nguid": "A1451322CDBB4EBC9D9C9CDFDCB55984", 00:17:51.782 "no_auto_visible": false 00:17:51.782 }, 00:17:51.782 "method": "nvmf_subsystem_add_ns", 00:17:51.782 "req_id": 1 00:17:51.782 } 00:17:51.782 Got JSON-RPC error response 00:17:51.782 response: 00:17:51.782 { 00:17:51.782 "code": -32602, 00:17:51.782 "message": "Invalid parameters" 00:17:51.782 } 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a1451322-cdbb-4ebc-9d9c-9cdfdcb55984 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:51.782 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A1451322CDBB4EBC9D9C9CDFDCB55984 -i 00:17:52.041 11:12:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:53.951 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:53.951 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:53.951 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 223410 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 223410 ']' 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 223410 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.209 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223410 00:17:54.467 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.467 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.467 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223410' 00:17:54.467 killing process with pid 223410 00:17:54.467 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 223410 00:17:54.467 11:12:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 223410 00:17:54.724 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.984 rmmod nvme_tcp 00:17:54.984 rmmod 
nvme_fabrics 00:17:54.984 rmmod nvme_keyring 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 221172 ']' 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 221172 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 221172 ']' 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 221172 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221172 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 221172' 00:17:54.984 killing process with pid 221172 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 221172 00:17:54.984 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 221172 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:55.243 11:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.243 11:12:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:57.781 00:17:57.781 real 0m25.661s 00:17:57.781 user 0m37.119s 00:17:57.781 sys 0m4.826s 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:57.781 ************************************ 00:17:57.781 END TEST nvmf_ns_masking 00:17:57.781 ************************************ 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.781 ************************************ 00:17:57.781 START TEST nvmf_nvme_cli 00:17:57.781 ************************************ 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:57.781 * Looking for test storage... 00:17:57.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:57.781 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:57.782 11:12:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.782 11:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.782 --rc genhtml_branch_coverage=1 00:17:57.782 --rc genhtml_function_coverage=1 00:17:57.782 --rc genhtml_legend=1 00:17:57.782 --rc geninfo_all_blocks=1 00:17:57.782 --rc geninfo_unexecuted_blocks=1 00:17:57.782 
00:17:57.782 ' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.782 --rc genhtml_branch_coverage=1 00:17:57.782 --rc genhtml_function_coverage=1 00:17:57.782 --rc genhtml_legend=1 00:17:57.782 --rc geninfo_all_blocks=1 00:17:57.782 --rc geninfo_unexecuted_blocks=1 00:17:57.782 00:17:57.782 ' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.782 --rc genhtml_branch_coverage=1 00:17:57.782 --rc genhtml_function_coverage=1 00:17:57.782 --rc genhtml_legend=1 00:17:57.782 --rc geninfo_all_blocks=1 00:17:57.782 --rc geninfo_unexecuted_blocks=1 00:17:57.782 00:17:57.782 ' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:57.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.782 --rc genhtml_branch_coverage=1 00:17:57.782 --rc genhtml_function_coverage=1 00:17:57.782 --rc genhtml_legend=1 00:17:57.782 --rc geninfo_all_blocks=1 00:17:57.782 --rc geninfo_unexecuted_blocks=1 00:17:57.782 00:17:57.782 ' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.782 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.782 11:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:57.783 11:12:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:59.692 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.692 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:59.693 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:59.693 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.693 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:59.693 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.693 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:59.694 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.694 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.694 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:17:59.695 00:17:59.695 --- 10.0.0.2 ping statistics --- 00:17:59.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.695 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:17:59.695 00:17:59.695 --- 10.0.0.1 ping statistics --- 00:17:59.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.695 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.695 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.696 11:12:24 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=226321 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 226321 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 226321 ']' 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.696 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.971 [2024-11-17 11:12:24.349632] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:17:59.971 [2024-11-17 11:12:24.349731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.971 [2024-11-17 11:12:24.426520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:59.971 [2024-11-17 11:12:24.476137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.971 [2024-11-17 11:12:24.476205] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.971 [2024-11-17 11:12:24.476233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.971 [2024-11-17 11:12:24.476245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.971 [2024-11-17 11:12:24.476254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.971 [2024-11-17 11:12:24.477884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.971 [2024-11-17 11:12:24.477943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.971 [2024-11-17 11:12:24.478009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.971 [2024-11-17 11:12:24.478012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.971 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 [2024-11-17 11:12:24.632357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 Malloc0 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 Malloc1 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 [2024-11-17 11:12:24.735298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:00.231 00:18:00.231 Discovery Log Number of Records 2, Generation counter 2 00:18:00.231 =====Discovery Log Entry 0====== 00:18:00.231 trtype: tcp 00:18:00.231 adrfam: ipv4 00:18:00.231 subtype: current discovery subsystem 00:18:00.231 treq: not required 00:18:00.231 portid: 0 00:18:00.231 trsvcid: 4420 
00:18:00.231 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:00.231 traddr: 10.0.0.2 00:18:00.231 eflags: explicit discovery connections, duplicate discovery information 00:18:00.231 sectype: none 00:18:00.231 =====Discovery Log Entry 1====== 00:18:00.231 trtype: tcp 00:18:00.231 adrfam: ipv4 00:18:00.231 subtype: nvme subsystem 00:18:00.231 treq: not required 00:18:00.231 portid: 0 00:18:00.231 trsvcid: 4420 00:18:00.231 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:00.231 traddr: 10.0.0.2 00:18:00.231 eflags: none 00:18:00.231 sectype: none 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.231 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:00.490 11:12:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.062 11:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:01.062 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:01.062 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.062 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:01.062 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:01.062 11:12:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:02.969 
11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:02.969 /dev/nvme0n2 ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:02.969 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:03.230 rmmod nvme_tcp 00:18:03.230 rmmod nvme_fabrics 00:18:03.230 rmmod nvme_keyring 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 226321 ']' 
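The `get_nvme_devs` helper exercised repeatedly above does nothing more than keep the first column of `nvme list` lines that name a device node, skipping the header rows. A self-contained sketch over canned output (the sample table is our stand-in for a live fabric connection):

```shell
# Standalone sketch of the get_nvme_devs filtering seen in the trace:
# read each line of `nvme list` output and echo first fields that
# look like /dev/nvme* device nodes; header lines fall through.
get_nvme_devs() {
  local dev _
  while read -r dev _; do
    [[ $dev == /dev/nvme* ]] && echo "$dev"
  done
}

# Canned `nvme list` output stands in for a connected target.
get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- ----------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK_Controller1
EOF
```

With the two SPDK namespaces connected, this yields `/dev/nvme0n1` and `/dev/nvme0n2`, which is exactly the `nvme_num=2` count the test compares against.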
00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 226321 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 226321 ']' 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 226321 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 226321 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 226321' 00:18:03.230 killing process with pid 226321 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 226321 00:18:03.230 11:12:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 226321 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
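The `killprocess` teardown traced above follows one guard pattern: verify the pid is alive, refuse to signal anything whose comm is `sudo`, then kill and reap. A minimal runnable sketch (the `sudo` guard and the "killing process with pid" message are from the trace; the rest of the structure is an assumption about the helper's shape):

```shell
# Sketch of the killprocess guard pattern from the trace above.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1       # pid no longer running
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1               # never signal the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid" 2>/dev/null
  return 0
}

# Demonstrate on a throwaway background process instead of an nvmf target.
sleep 30 &
killprocess $!
```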
00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:03.490 11:12:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:06.032 00:18:06.032 real 0m8.136s 00:18:06.032 user 0m14.802s 00:18:06.032 sys 0m2.248s 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:06.032 ************************************ 00:18:06.032 END TEST nvmf_nvme_cli 00:18:06.032 ************************************ 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:06.032 ************************************ 00:18:06.032 START TEST 
nvmf_vfio_user 00:18:06.032 ************************************ 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:06.032 * Looking for test storage... 00:18:06.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:06.032 11:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:06.032 11:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.032 --rc genhtml_branch_coverage=1 00:18:06.032 --rc genhtml_function_coverage=1 00:18:06.032 --rc genhtml_legend=1 00:18:06.032 --rc geninfo_all_blocks=1 00:18:06.032 --rc geninfo_unexecuted_blocks=1 00:18:06.032 00:18:06.032 ' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.032 --rc genhtml_branch_coverage=1 00:18:06.032 --rc genhtml_function_coverage=1 00:18:06.032 --rc genhtml_legend=1 00:18:06.032 --rc geninfo_all_blocks=1 00:18:06.032 --rc geninfo_unexecuted_blocks=1 00:18:06.032 00:18:06.032 ' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.032 --rc genhtml_branch_coverage=1 00:18:06.032 --rc genhtml_function_coverage=1 00:18:06.032 --rc genhtml_legend=1 00:18:06.032 --rc geninfo_all_blocks=1 00:18:06.032 --rc geninfo_unexecuted_blocks=1 00:18:06.032 00:18:06.032 ' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:06.032 --rc genhtml_branch_coverage=1 00:18:06.032 --rc genhtml_function_coverage=1 00:18:06.032 --rc genhtml_legend=1 00:18:06.032 --rc geninfo_all_blocks=1 00:18:06.032 --rc geninfo_unexecuted_blocks=1 00:18:06.032 00:18:06.032 ' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:06.032 
11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.032 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:06.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:06.033 11:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=227243 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 227243' 00:18:06.033 Process pid: 227243 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 227243 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
227243 ']' 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:06.033 [2024-11-17 11:12:30.360133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:06.033 [2024-11-17 11:12:30.360208] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.033 [2024-11-17 11:12:30.428187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.033 [2024-11-17 11:12:30.473227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.033 [2024-11-17 11:12:30.473286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.033 [2024-11-17 11:12:30.473314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.033 [2024-11-17 11:12:30.473325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.033 [2024-11-17 11:12:30.473334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
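The vfio-user setup that the remainder of this trace walks through repeats one pattern per device: a socket directory, a malloc bdev, a subsystem, a namespace, and a VFIOUSER listener. In dry-run form (NQNs, paths, and sizes are taken from the trace; stubbing `rpc.py` and the `mkdir` with `echo` is our assumption):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-device vfio-user setup performed in this trace.
# rpc() echoes instead of invoking the real scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

setup_vfio_user() {
  local num_devices=${1:-2} i dir
  for i in $(seq 1 "$num_devices"); do
    dir=/var/run/vfio-user/domain/vfio-user$i/$i
    echo "mkdir -p $dir"                        # listener socket directory
    rpc bdev_malloc_create 64 512 -b Malloc$i
    rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
  done
}

setup_vfio_user 2
```

Note the VFIOUSER listener address is a filesystem path rather than an IP, and the service id is a fixed `0`, which is why the transport needs the `mkdir` before `nvmf_subsystem_add_listener`.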
00:18:06.033 [2024-11-17 11:12:30.474784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.033 [2024-11-17 11:12:30.474844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.033 [2024-11-17 11:12:30.474911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.033 [2024-11-17 11:12:30.474914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:06.033 11:12:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:06.971 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:07.230 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:07.489 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:07.489 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:07.489 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:07.489 11:12:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.748 Malloc1 00:18:07.748 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:08.061 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:08.353 11:12:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:08.634 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.634 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:08.634 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:08.929 Malloc2 00:18:08.929 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:09.216 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:09.216 11:12:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:09.509 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:09.792 [2024-11-17 11:12:34.162941] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:09.792 [2024-11-17 11:12:34.162984] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid227677 ] 00:18:09.792 [2024-11-17 11:12:34.212759] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:09.792 [2024-11-17 11:12:34.221064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.792 [2024-11-17 11:12:34.221092] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f64498c8000 00:18:09.792 [2024-11-17 11:12:34.222051] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.223043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.224048] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.225059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.226059] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.227063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.228067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.229074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:09.792 [2024-11-17 11:12:34.230083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:09.792 [2024-11-17 11:12:34.230103] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f64485c0000 00:18:09.792 [2024-11-17 11:12:34.231217] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:09.792 [2024-11-17 11:12:34.246931] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:09.792 [2024-11-17 11:12:34.246971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:09.792 [2024-11-17 11:12:34.249199] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:09.792 [2024-11-17 11:12:34.249258] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:09.792 [2024-11-17 11:12:34.249358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:09.792 [2024-11-17 11:12:34.249391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:09.792 [2024-11-17 11:12:34.249403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:09.792 [2024-11-17 11:12:34.250534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:09.792 [2024-11-17 11:12:34.250557] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:09.792 [2024-11-17 11:12:34.250570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:09.792 [2024-11-17 11:12:34.251195] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:09.792 [2024-11-17 11:12:34.251215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:09.792 [2024-11-17 11:12:34.251228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:09.792 [2024-11-17 11:12:34.252198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:09.792 [2024-11-17 11:12:34.252217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:09.792 [2024-11-17 11:12:34.253202] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:09.792 [2024-11-17 11:12:34.253221] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:09.792 [2024-11-17 11:12:34.253230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:09.792 [2024-11-17 11:12:34.253241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:09.792 [2024-11-17 11:12:34.253352] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:09.792 [2024-11-17 11:12:34.253360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:09.792 [2024-11-17 11:12:34.253369] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:09.793 [2024-11-17 11:12:34.257535] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:09.793 [2024-11-17 11:12:34.258231] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:09.793 [2024-11-17 11:12:34.259235] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:09.793 [2024-11-17 11:12:34.260230] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.793 [2024-11-17 11:12:34.260386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:09.793 [2024-11-17 11:12:34.261241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:09.793 [2024-11-17 11:12:34.261259] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:09.793 [2024-11-17 11:12:34.261268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261293] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:09.793 [2024-11-17 11:12:34.261309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261339] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.793 [2024-11-17 11:12:34.261349] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.793 [2024-11-17 11:12:34.261356] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.793 [2024-11-17 11:12:34.261379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.261452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.261472] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:09.793 [2024-11-17 11:12:34.261484] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:09.793 [2024-11-17 11:12:34.261492] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:09.793 [2024-11-17 11:12:34.261501] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:09.793 [2024-11-17 11:12:34.261536] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:09.793 [2024-11-17 11:12:34.261548] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:09.793 [2024-11-17 11:12:34.261556] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.261608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.261626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.793 [2024-11-17 11:12:34.261638] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.793 [2024-11-17 11:12:34.261650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.793 [2024-11-17 11:12:34.261662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:09.793 [2024-11-17 11:12:34.261670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.261709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.261724] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:09.793 [2024-11-17 11:12:34.261734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.261785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.261866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261902] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:09.793 [2024-11-17 11:12:34.261910] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:09.793 [2024-11-17 11:12:34.261916] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.793 [2024-11-17 11:12:34.261925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.261939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.261959] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:09.793 [2024-11-17 11:12:34.261980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.261996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262008] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.793 [2024-11-17 11:12:34.262016] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.793 [2024-11-17 11:12:34.262022] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.793 [2024-11-17 11:12:34.262031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.262084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262111] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:09.793 [2024-11-17 11:12:34.262118] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.793 [2024-11-17 11:12:34.262124] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.793 [2024-11-17 11:12:34.262133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:09.793 [2024-11-17 11:12:34.262162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262231] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:09.793 [2024-11-17 11:12:34.262238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:09.793 [2024-11-17 11:12:34.262247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:09.793 [2024-11-17 11:12:34.262278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.262316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.262343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.262373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:09.793 [2024-11-17 11:12:34.262384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:09.793 [2024-11-17 11:12:34.262407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:09.793 [2024-11-17 11:12:34.262417] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:09.794 [2024-11-17 11:12:34.262424] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:09.794 [2024-11-17 11:12:34.262429] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:09.794 [2024-11-17 11:12:34.262435] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:09.794 [2024-11-17 11:12:34.262444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:09.794 [2024-11-17 11:12:34.262456] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:09.794 [2024-11-17 11:12:34.262463] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:09.794 [2024-11-17 11:12:34.262469] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.794 [2024-11-17 11:12:34.262478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:09.794 [2024-11-17 11:12:34.262489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:09.794 [2024-11-17 11:12:34.262496] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:09.794 [2024-11-17 11:12:34.262517] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.794 [2024-11-17 11:12:34.262535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:09.794 [2024-11-17 11:12:34.262549] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:09.794 [2024-11-17 11:12:34.262558] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:09.794 [2024-11-17 11:12:34.262569] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:09.794 [2024-11-17 11:12:34.262579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:09.794 [2024-11-17 11:12:34.262591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:09.794 [2024-11-17 
11:12:34.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:09.794 [2024-11-17 11:12:34.262629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:09.794 [2024-11-17 11:12:34.262641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:09.794 ===================================================== 00:18:09.794 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:09.794 ===================================================== 00:18:09.794 Controller Capabilities/Features 00:18:09.794 ================================ 00:18:09.794 Vendor ID: 4e58 00:18:09.794 Subsystem Vendor ID: 4e58 00:18:09.794 Serial Number: SPDK1 00:18:09.794 Model Number: SPDK bdev Controller 00:18:09.794 Firmware Version: 25.01 00:18:09.794 Recommended Arb Burst: 6 00:18:09.794 IEEE OUI Identifier: 8d 6b 50 00:18:09.794 Multi-path I/O 00:18:09.794 May have multiple subsystem ports: Yes 00:18:09.794 May have multiple controllers: Yes 00:18:09.794 Associated with SR-IOV VF: No 00:18:09.794 Max Data Transfer Size: 131072 00:18:09.794 Max Number of Namespaces: 32 00:18:09.794 Max Number of I/O Queues: 127 00:18:09.794 NVMe Specification Version (VS): 1.3 00:18:09.794 NVMe Specification Version (Identify): 1.3 00:18:09.794 Maximum Queue Entries: 256 00:18:09.794 Contiguous Queues Required: Yes 00:18:09.794 Arbitration Mechanisms Supported 00:18:09.794 Weighted Round Robin: Not Supported 00:18:09.794 Vendor Specific: Not Supported 00:18:09.794 Reset Timeout: 15000 ms 00:18:09.794 Doorbell Stride: 4 bytes 00:18:09.794 NVM Subsystem Reset: Not Supported 00:18:09.794 Command Sets Supported 00:18:09.794 NVM Command Set: Supported 00:18:09.794 Boot Partition: Not Supported 00:18:09.794 Memory Page Size Minimum: 4096 bytes 00:18:09.794 
Memory Page Size Maximum: 4096 bytes 00:18:09.794 Persistent Memory Region: Not Supported 00:18:09.794 Optional Asynchronous Events Supported 00:18:09.794 Namespace Attribute Notices: Supported 00:18:09.794 Firmware Activation Notices: Not Supported 00:18:09.794 ANA Change Notices: Not Supported 00:18:09.794 PLE Aggregate Log Change Notices: Not Supported 00:18:09.794 LBA Status Info Alert Notices: Not Supported 00:18:09.794 EGE Aggregate Log Change Notices: Not Supported 00:18:09.794 Normal NVM Subsystem Shutdown event: Not Supported 00:18:09.794 Zone Descriptor Change Notices: Not Supported 00:18:09.794 Discovery Log Change Notices: Not Supported 00:18:09.794 Controller Attributes 00:18:09.794 128-bit Host Identifier: Supported 00:18:09.794 Non-Operational Permissive Mode: Not Supported 00:18:09.794 NVM Sets: Not Supported 00:18:09.794 Read Recovery Levels: Not Supported 00:18:09.794 Endurance Groups: Not Supported 00:18:09.794 Predictable Latency Mode: Not Supported 00:18:09.794 Traffic Based Keep ALive: Not Supported 00:18:09.794 Namespace Granularity: Not Supported 00:18:09.794 SQ Associations: Not Supported 00:18:09.794 UUID List: Not Supported 00:18:09.794 Multi-Domain Subsystem: Not Supported 00:18:09.794 Fixed Capacity Management: Not Supported 00:18:09.794 Variable Capacity Management: Not Supported 00:18:09.794 Delete Endurance Group: Not Supported 00:18:09.794 Delete NVM Set: Not Supported 00:18:09.794 Extended LBA Formats Supported: Not Supported 00:18:09.794 Flexible Data Placement Supported: Not Supported 00:18:09.794 00:18:09.794 Controller Memory Buffer Support 00:18:09.794 ================================ 00:18:09.794 Supported: No 00:18:09.794 00:18:09.794 Persistent Memory Region Support 00:18:09.794 ================================ 00:18:09.794 Supported: No 00:18:09.794 00:18:09.794 Admin Command Set Attributes 00:18:09.794 ============================ 00:18:09.794 Security Send/Receive: Not Supported 00:18:09.794 Format NVM: Not Supported 
00:18:09.794 Firmware Activate/Download: Not Supported 00:18:09.794 Namespace Management: Not Supported 00:18:09.794 Device Self-Test: Not Supported 00:18:09.794 Directives: Not Supported 00:18:09.794 NVMe-MI: Not Supported 00:18:09.794 Virtualization Management: Not Supported 00:18:09.794 Doorbell Buffer Config: Not Supported 00:18:09.794 Get LBA Status Capability: Not Supported 00:18:09.794 Command & Feature Lockdown Capability: Not Supported 00:18:09.794 Abort Command Limit: 4 00:18:09.794 Async Event Request Limit: 4 00:18:09.794 Number of Firmware Slots: N/A 00:18:09.794 Firmware Slot 1 Read-Only: N/A 00:18:09.794 Firmware Activation Without Reset: N/A 00:18:09.794 Multiple Update Detection Support: N/A 00:18:09.794 Firmware Update Granularity: No Information Provided 00:18:09.794 Per-Namespace SMART Log: No 00:18:09.794 Asymmetric Namespace Access Log Page: Not Supported 00:18:09.794 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:09.794 Command Effects Log Page: Supported 00:18:09.794 Get Log Page Extended Data: Supported 00:18:09.794 Telemetry Log Pages: Not Supported 00:18:09.794 Persistent Event Log Pages: Not Supported 00:18:09.794 Supported Log Pages Log Page: May Support 00:18:09.794 Commands Supported & Effects Log Page: Not Supported 00:18:09.794 Feature Identifiers & Effects Log Page:May Support 00:18:09.794 NVMe-MI Commands & Effects Log Page: May Support 00:18:09.794 Data Area 4 for Telemetry Log: Not Supported 00:18:09.794 Error Log Page Entries Supported: 128 00:18:09.794 Keep Alive: Supported 00:18:09.794 Keep Alive Granularity: 10000 ms 00:18:09.794 00:18:09.794 NVM Command Set Attributes 00:18:09.794 ========================== 00:18:09.794 Submission Queue Entry Size 00:18:09.794 Max: 64 00:18:09.794 Min: 64 00:18:09.794 Completion Queue Entry Size 00:18:09.794 Max: 16 00:18:09.794 Min: 16 00:18:09.794 Number of Namespaces: 32 00:18:09.794 Compare Command: Supported 00:18:09.794 Write Uncorrectable Command: Not Supported 00:18:09.794 Dataset 
Management Command: Supported 00:18:09.794 Write Zeroes Command: Supported 00:18:09.794 Set Features Save Field: Not Supported 00:18:09.794 Reservations: Not Supported 00:18:09.794 Timestamp: Not Supported 00:18:09.794 Copy: Supported 00:18:09.794 Volatile Write Cache: Present 00:18:09.794 Atomic Write Unit (Normal): 1 00:18:09.794 Atomic Write Unit (PFail): 1 00:18:09.794 Atomic Compare & Write Unit: 1 00:18:09.794 Fused Compare & Write: Supported 00:18:09.794 Scatter-Gather List 00:18:09.794 SGL Command Set: Supported (Dword aligned) 00:18:09.794 SGL Keyed: Not Supported 00:18:09.794 SGL Bit Bucket Descriptor: Not Supported 00:18:09.794 SGL Metadata Pointer: Not Supported 00:18:09.794 Oversized SGL: Not Supported 00:18:09.794 SGL Metadata Address: Not Supported 00:18:09.794 SGL Offset: Not Supported 00:18:09.794 Transport SGL Data Block: Not Supported 00:18:09.794 Replay Protected Memory Block: Not Supported 00:18:09.794 00:18:09.794 Firmware Slot Information 00:18:09.794 ========================= 00:18:09.794 Active slot: 1 00:18:09.794 Slot 1 Firmware Revision: 25.01 00:18:09.794 00:18:09.794 00:18:09.794 Commands Supported and Effects 00:18:09.794 ============================== 00:18:09.794 Admin Commands 00:18:09.794 -------------- 00:18:09.794 Get Log Page (02h): Supported 00:18:09.794 Identify (06h): Supported 00:18:09.795 Abort (08h): Supported 00:18:09.795 Set Features (09h): Supported 00:18:09.795 Get Features (0Ah): Supported 00:18:09.795 Asynchronous Event Request (0Ch): Supported 00:18:09.795 Keep Alive (18h): Supported 00:18:09.795 I/O Commands 00:18:09.795 ------------ 00:18:09.795 Flush (00h): Supported LBA-Change 00:18:09.795 Write (01h): Supported LBA-Change 00:18:09.795 Read (02h): Supported 00:18:09.795 Compare (05h): Supported 00:18:09.795 Write Zeroes (08h): Supported LBA-Change 00:18:09.795 Dataset Management (09h): Supported LBA-Change 00:18:09.795 Copy (19h): Supported LBA-Change 00:18:09.795 00:18:09.795 Error Log 00:18:09.795 ========= 
00:18:09.795 00:18:09.795 Arbitration 00:18:09.795 =========== 00:18:09.795 Arbitration Burst: 1 00:18:09.795 00:18:09.795 Power Management 00:18:09.795 ================ 00:18:09.795 Number of Power States: 1 00:18:09.795 Current Power State: Power State #0 00:18:09.795 Power State #0: 00:18:09.795 Max Power: 0.00 W 00:18:09.795 Non-Operational State: Operational 00:18:09.795 Entry Latency: Not Reported 00:18:09.795 Exit Latency: Not Reported 00:18:09.795 Relative Read Throughput: 0 00:18:09.795 Relative Read Latency: 0 00:18:09.795 Relative Write Throughput: 0 00:18:09.795 Relative Write Latency: 0 00:18:09.795 Idle Power: Not Reported 00:18:09.795 Active Power: Not Reported 00:18:09.795 Non-Operational Permissive Mode: Not Supported 00:18:09.795 00:18:09.795 Health Information 00:18:09.795 ================== 00:18:09.795 Critical Warnings: 00:18:09.795 Available Spare Space: OK 00:18:09.795 Temperature: OK 00:18:09.795 Device Reliability: OK 00:18:09.795 Read Only: No 00:18:09.795 Volatile Memory Backup: OK 00:18:09.795 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:09.795 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:09.795 Available Spare: 0% 00:18:09.795
[2024-11-17 11:12:34.262767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:09.795 [2024-11-17 11:12:34.262784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:09.795 [2024-11-17 11:12:34.262848] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:09.795 [2024-11-17 11:12:34.262867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.795 [2024-11-17 11:12:34.262878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.795 [2024-11-17 11:12:34.262888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.795 [2024-11-17 11:12:34.262897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:09.795 [2024-11-17 11:12:34.263257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:09.795 [2024-11-17 11:12:34.263280] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:09.795 [2024-11-17 11:12:34.264258] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.795 [2024-11-17 11:12:34.264353] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:09.795 [2024-11-17 11:12:34.264367] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:09.795 [2024-11-17 11:12:34.265268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:09.795 [2024-11-17 11:12:34.265307] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:09.795 [2024-11-17 11:12:34.265365] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:09.795 [2024-11-17 11:12:34.267313] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:09.795 Available Spare Threshold: 0% 00:18:09.795 Life Percentage Used: 0% 00:18:09.795 Data Units Read: 0 00:18:09.795 Data
Units Written: 0 00:18:09.795 Host Read Commands: 0 00:18:09.795 Host Write Commands: 0 00:18:09.795 Controller Busy Time: 0 minutes 00:18:09.795 Power Cycles: 0 00:18:09.795 Power On Hours: 0 hours 00:18:09.795 Unsafe Shutdowns: 0 00:18:09.795 Unrecoverable Media Errors: 0 00:18:09.795 Lifetime Error Log Entries: 0 00:18:09.795 Warning Temperature Time: 0 minutes 00:18:09.795 Critical Temperature Time: 0 minutes 00:18:09.795 00:18:09.795 Number of Queues 00:18:09.795 ================ 00:18:09.795 Number of I/O Submission Queues: 127 00:18:09.795 Number of I/O Completion Queues: 127 00:18:09.795 00:18:09.795 Active Namespaces 00:18:09.795 ================= 00:18:09.795 Namespace ID:1 00:18:09.795 Error Recovery Timeout: Unlimited 00:18:09.795 Command Set Identifier: NVM (00h) 00:18:09.795 Deallocate: Supported 00:18:09.795 Deallocated/Unwritten Error: Not Supported 00:18:09.795 Deallocated Read Value: Unknown 00:18:09.795 Deallocate in Write Zeroes: Not Supported 00:18:09.795 Deallocated Guard Field: 0xFFFF 00:18:09.795 Flush: Supported 00:18:09.795 Reservation: Supported 00:18:09.795 Namespace Sharing Capabilities: Multiple Controllers 00:18:09.795 Size (in LBAs): 131072 (0GiB) 00:18:09.795 Capacity (in LBAs): 131072 (0GiB) 00:18:09.795 Utilization (in LBAs): 131072 (0GiB) 00:18:09.795 NGUID: A5B30648610649EC905036ED4AA9A34E 00:18:09.795 UUID: a5b30648-6106-49ec-9050-36ed4aa9a34e 00:18:09.795 Thin Provisioning: Not Supported 00:18:09.795 Per-NS Atomic Units: Yes 00:18:09.795 Atomic Boundary Size (Normal): 0 00:18:09.795 Atomic Boundary Size (PFail): 0 00:18:09.795 Atomic Boundary Offset: 0 00:18:09.795 Maximum Single Source Range Length: 65535 00:18:09.795 Maximum Copy Length: 65535 00:18:09.795 Maximum Source Range Count: 1 00:18:09.795 NGUID/EUI64 Never Reused: No 00:18:09.795 Namespace Write Protected: No 00:18:09.795 Number of LBA Formats: 1 00:18:09.795 Current LBA Format: LBA Format #00 00:18:09.795 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:09.795 00:18:09.795 11:12:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:10.072 [2024-11-17 11:12:34.519427] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:15.455 Initializing NVMe Controllers 00:18:15.455 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:15.455 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:15.455 Initialization complete. Launching workers. 00:18:15.455 ======================================================== 00:18:15.455 Latency(us) 00:18:15.455 Device Information : IOPS MiB/s Average min max 00:18:15.455 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32743.00 127.90 3910.36 1183.24 8278.40 00:18:15.455 ======================================================== 00:18:15.455 Total : 32743.00 127.90 3910.36 1183.24 8278.40 00:18:15.455 00:18:15.455 [2024-11-17 11:12:39.541561] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:15.455 11:12:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:15.455 [2024-11-17 11:12:39.794747] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.739 Initializing NVMe Controllers 00:18:20.739 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:20.739 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:20.739 Initialization complete. Launching workers. 00:18:20.739 ======================================================== 00:18:20.739 Latency(us) 00:18:20.739 Device Information : IOPS MiB/s Average min max 00:18:20.739 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15860.52 61.96 8069.59 5963.13 59386.67 00:18:20.739 ======================================================== 00:18:20.739 Total : 15860.52 61.96 8069.59 5963.13 59386.67 00:18:20.739 00:18:20.739 [2024-11-17 11:12:44.834056] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.739 11:12:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:20.739 [2024-11-17 11:12:45.064191] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.016 [2024-11-17 11:12:50.145957] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.016 Initializing NVMe Controllers 00:18:26.016 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.016 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:26.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:26.016 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:26.016 Initialization complete. Launching workers. 
00:18:26.016 Starting thread on core 2 00:18:26.016 Starting thread on core 3 00:18:26.016 Starting thread on core 1 00:18:26.016 11:12:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:26.016 [2024-11-17 11:12:50.459090] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.214 [2024-11-17 11:12:54.351832] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:30.214 Initializing NVMe Controllers 00:18:30.214 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.214 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.214 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:30.214 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:30.214 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:30.214 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:30.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:30.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:30.214 Initialization complete. Launching workers. 
00:18:30.214 Starting thread on core 1 with urgent priority queue 00:18:30.214 Starting thread on core 2 with urgent priority queue 00:18:30.214 Starting thread on core 3 with urgent priority queue 00:18:30.214 Starting thread on core 0 with urgent priority queue 00:18:30.214 SPDK bdev Controller (SPDK1 ) core 0: 2117.00 IO/s 47.24 secs/100000 ios 00:18:30.214 SPDK bdev Controller (SPDK1 ) core 1: 2308.33 IO/s 43.32 secs/100000 ios 00:18:30.214 SPDK bdev Controller (SPDK1 ) core 2: 2321.00 IO/s 43.08 secs/100000 ios 00:18:30.214 SPDK bdev Controller (SPDK1 ) core 3: 2362.33 IO/s 42.33 secs/100000 ios 00:18:30.214 ======================================================== 00:18:30.214 00:18:30.214 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:30.214 [2024-11-17 11:12:54.669076] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.214 Initializing NVMe Controllers 00:18:30.214 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.214 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:30.214 Namespace ID: 1 size: 0GB 00:18:30.214 Initialization complete. 00:18:30.214 INFO: using host memory buffer for IO 00:18:30.214 Hello world! 
00:18:30.214 [2024-11-17 11:12:54.709721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:30.214 11:12:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:30.475 [2024-11-17 11:12:55.010966] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:31.419 Initializing NVMe Controllers 00:18:31.419 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.419 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:31.419 Initialization complete. Launching workers. 00:18:31.419 submit (in ns) avg, min, max = 7810.9, 3512.2, 4016000.0 00:18:31.419 complete (in ns) avg, min, max = 25151.5, 2078.9, 6995767.8 00:18:31.419 00:18:31.419 Submit histogram 00:18:31.419 ================ 00:18:31.419 Range in us Cumulative Count 00:18:31.419 3.508 - 3.532: 0.0551% ( 7) 00:18:31.419 3.532 - 3.556: 0.3384% ( 36) 00:18:31.419 3.556 - 3.579: 1.0468% ( 90) 00:18:31.419 3.579 - 3.603: 3.5891% ( 323) 00:18:31.419 3.603 - 3.627: 8.2566% ( 593) 00:18:31.419 3.627 - 3.650: 16.3951% ( 1034) 00:18:31.419 3.650 - 3.674: 25.7222% ( 1185) 00:18:31.419 3.674 - 3.698: 36.3872% ( 1355) 00:18:31.419 3.698 - 3.721: 45.4782% ( 1155) 00:18:31.419 3.721 - 3.745: 52.0976% ( 841) 00:18:31.419 3.745 - 3.769: 56.0488% ( 502) 00:18:31.419 3.769 - 3.793: 59.7639% ( 472) 00:18:31.419 3.793 - 3.816: 62.9988% ( 411) 00:18:31.419 3.816 - 3.840: 66.2967% ( 419) 00:18:31.419 3.840 - 3.864: 69.8701% ( 454) 00:18:31.419 3.864 - 3.887: 73.9158% ( 514) 00:18:31.419 3.887 - 3.911: 78.1110% ( 533) 00:18:31.419 3.911 - 3.935: 82.2747% ( 529) 00:18:31.419 3.935 - 3.959: 85.1318% ( 363) 00:18:31.419 3.959 - 3.982: 87.2885% ( 274) 00:18:31.419 3.982 - 4.006: 88.9807% ( 215) 
00:18:31.419 4.006 - 4.030: 90.3660% ( 176) 00:18:31.419 4.030 - 4.053: 91.3971% ( 131) 00:18:31.419 4.053 - 4.077: 92.2629% ( 110) 00:18:31.419 4.077 - 4.101: 93.0972% ( 106) 00:18:31.419 4.101 - 4.124: 93.8213% ( 92) 00:18:31.419 4.124 - 4.148: 94.5061% ( 87) 00:18:31.419 4.148 - 4.172: 95.0885% ( 74) 00:18:31.419 4.172 - 4.196: 95.4821% ( 50) 00:18:31.419 4.196 - 4.219: 95.8048% ( 41) 00:18:31.419 4.219 - 4.243: 95.9937% ( 24) 00:18:31.419 4.243 - 4.267: 96.1747% ( 23) 00:18:31.419 4.267 - 4.290: 96.3243% ( 19) 00:18:31.419 4.290 - 4.314: 96.4109% ( 11) 00:18:31.419 4.314 - 4.338: 96.5211% ( 14) 00:18:31.419 4.338 - 4.361: 96.6155% ( 12) 00:18:31.419 4.361 - 4.385: 96.6942% ( 10) 00:18:31.419 4.385 - 4.409: 96.7808% ( 11) 00:18:31.419 4.409 - 4.433: 96.8752% ( 12) 00:18:31.419 4.433 - 4.456: 96.9067% ( 4) 00:18:31.419 4.456 - 4.480: 96.9618% ( 7) 00:18:31.419 4.480 - 4.504: 96.9776% ( 2) 00:18:31.419 4.504 - 4.527: 97.0012% ( 3) 00:18:31.419 4.527 - 4.551: 97.0327% ( 4) 00:18:31.419 4.551 - 4.575: 97.0641% ( 4) 00:18:31.419 4.575 - 4.599: 97.0799% ( 2) 00:18:31.419 4.622 - 4.646: 97.1507% ( 9) 00:18:31.419 4.646 - 4.670: 97.2058% ( 7) 00:18:31.419 4.670 - 4.693: 97.2373% ( 4) 00:18:31.419 4.693 - 4.717: 97.3003% ( 8) 00:18:31.419 4.717 - 4.741: 97.3396% ( 5) 00:18:31.419 4.741 - 4.764: 97.3790% ( 5) 00:18:31.419 4.764 - 4.788: 97.4262% ( 6) 00:18:31.419 4.788 - 4.812: 97.4734% ( 6) 00:18:31.419 4.812 - 4.836: 97.5128% ( 5) 00:18:31.419 4.836 - 4.859: 97.5836% ( 9) 00:18:31.419 4.859 - 4.883: 97.6309% ( 6) 00:18:31.419 4.883 - 4.907: 97.6623% ( 4) 00:18:31.419 4.907 - 4.930: 97.6938% ( 4) 00:18:31.419 4.930 - 4.954: 97.7096% ( 2) 00:18:31.419 4.954 - 4.978: 97.7489% ( 5) 00:18:31.419 4.978 - 5.001: 97.7883% ( 5) 00:18:31.419 5.001 - 5.025: 97.8198% ( 4) 00:18:31.419 5.025 - 5.049: 97.8276% ( 1) 00:18:31.419 5.049 - 5.073: 97.8355% ( 1) 00:18:31.419 5.073 - 5.096: 97.8591% ( 3) 00:18:31.419 5.096 - 5.120: 97.8827% ( 3) 00:18:31.419 5.120 - 5.144: 97.9063% ( 3) 
00:18:31.419 5.144 - 5.167: 97.9142% ( 1) 00:18:31.419 5.167 - 5.191: 97.9221% ( 1) 00:18:31.419 5.262 - 5.286: 97.9299% ( 1) 00:18:31.419 5.310 - 5.333: 97.9378% ( 1) 00:18:31.419 5.357 - 5.381: 97.9457% ( 1) 00:18:31.419 5.404 - 5.428: 97.9614% ( 2) 00:18:31.419 5.452 - 5.476: 97.9693% ( 1) 00:18:31.419 5.523 - 5.547: 97.9772% ( 1) 00:18:31.419 5.570 - 5.594: 97.9850% ( 1) 00:18:31.419 5.594 - 5.618: 97.9929% ( 1) 00:18:31.419 5.641 - 5.665: 98.0008% ( 1) 00:18:31.419 5.784 - 5.807: 98.0244% ( 3) 00:18:31.420 5.926 - 5.950: 98.0323% ( 1) 00:18:31.420 6.044 - 6.068: 98.0559% ( 3) 00:18:31.420 6.258 - 6.305: 98.0638% ( 1) 00:18:31.420 6.305 - 6.353: 98.0716% ( 1) 00:18:31.420 6.447 - 6.495: 98.0874% ( 2) 00:18:31.420 6.779 - 6.827: 98.0952% ( 1) 00:18:31.420 6.827 - 6.874: 98.1031% ( 1) 00:18:31.420 6.921 - 6.969: 98.1110% ( 1) 00:18:31.420 6.969 - 7.016: 98.1189% ( 1) 00:18:31.420 7.016 - 7.064: 98.1267% ( 1) 00:18:31.420 7.206 - 7.253: 98.1346% ( 1) 00:18:31.420 7.253 - 7.301: 98.1503% ( 2) 00:18:31.420 7.348 - 7.396: 98.1661% ( 2) 00:18:31.420 7.396 - 7.443: 98.1739% ( 1) 00:18:31.420 7.443 - 7.490: 98.1897% ( 2) 00:18:31.420 7.585 - 7.633: 98.1976% ( 1) 00:18:31.420 7.633 - 7.680: 98.2054% ( 1) 00:18:31.420 7.680 - 7.727: 98.2212% ( 2) 00:18:31.420 7.775 - 7.822: 98.2369% ( 2) 00:18:31.420 7.917 - 7.964: 98.2448% ( 1) 00:18:31.420 8.107 - 8.154: 98.2527% ( 1) 00:18:31.420 8.154 - 8.201: 98.2684% ( 2) 00:18:31.420 8.249 - 8.296: 98.2763% ( 1) 00:18:31.420 8.391 - 8.439: 98.2920% ( 2) 00:18:31.420 8.486 - 8.533: 98.3078% ( 2) 00:18:31.420 8.581 - 8.628: 98.3235% ( 2) 00:18:31.420 8.628 - 8.676: 98.3314% ( 1) 00:18:31.420 8.723 - 8.770: 98.3471% ( 2) 00:18:31.420 8.770 - 8.818: 98.3550% ( 1) 00:18:31.420 8.865 - 8.913: 98.3628% ( 1) 00:18:31.420 8.913 - 8.960: 98.3786% ( 2) 00:18:31.420 9.007 - 9.055: 98.3865% ( 1) 00:18:31.420 9.055 - 9.102: 98.3943% ( 1) 00:18:31.420 9.102 - 9.150: 98.4258% ( 4) 00:18:31.420 9.150 - 9.197: 98.4416% ( 2) 00:18:31.420 9.244 - 
9.292: 98.4573% ( 2) 00:18:31.420 9.339 - 9.387: 98.4652% ( 1) 00:18:31.420 9.387 - 9.434: 98.4730% ( 1) 00:18:31.420 9.481 - 9.529: 98.4888% ( 2) 00:18:31.420 9.529 - 9.576: 98.5045% ( 2) 00:18:31.420 9.576 - 9.624: 98.5124% ( 1) 00:18:31.420 9.624 - 9.671: 98.5203% ( 1) 00:18:31.420 9.861 - 9.908: 98.5281% ( 1) 00:18:31.420 9.908 - 9.956: 98.5439% ( 2) 00:18:31.420 10.193 - 10.240: 98.5518% ( 1) 00:18:31.420 10.240 - 10.287: 98.5596% ( 1) 00:18:31.420 10.287 - 10.335: 98.5754% ( 2) 00:18:31.420 10.335 - 10.382: 98.5832% ( 1) 00:18:31.420 10.477 - 10.524: 98.5911% ( 1) 00:18:31.420 10.619 - 10.667: 98.5990% ( 1) 00:18:31.420 10.667 - 10.714: 98.6068% ( 1) 00:18:31.420 10.809 - 10.856: 98.6147% ( 1) 00:18:31.420 10.856 - 10.904: 98.6226% ( 1) 00:18:31.420 10.951 - 10.999: 98.6305% ( 1) 00:18:31.420 11.283 - 11.330: 98.6383% ( 1) 00:18:31.420 11.567 - 11.615: 98.6462% ( 1) 00:18:31.420 11.757 - 11.804: 98.6541% ( 1) 00:18:31.420 11.947 - 11.994: 98.6619% ( 1) 00:18:31.420 11.994 - 12.041: 98.6777% ( 2) 00:18:31.420 12.231 - 12.326: 98.6856% ( 1) 00:18:31.420 12.326 - 12.421: 98.6934% ( 1) 00:18:31.420 12.610 - 12.705: 98.7013% ( 1) 00:18:31.420 12.800 - 12.895: 98.7092% ( 1) 00:18:31.420 12.895 - 12.990: 98.7170% ( 1) 00:18:31.420 13.179 - 13.274: 98.7328% ( 2) 00:18:31.420 13.843 - 13.938: 98.7407% ( 1) 00:18:31.420 13.938 - 14.033: 98.7485% ( 1) 00:18:31.420 14.222 - 14.317: 98.7564% ( 1) 00:18:31.420 14.507 - 14.601: 98.7721% ( 2) 00:18:31.420 15.170 - 15.265: 98.7800% ( 1) 00:18:31.420 15.644 - 15.739: 98.7879% ( 1) 00:18:31.420 16.024 - 16.119: 98.7957% ( 1) 00:18:31.420 16.403 - 16.498: 98.8036% ( 1) 00:18:31.420 17.067 - 17.161: 98.8194% ( 2) 00:18:31.420 17.161 - 17.256: 98.8430% ( 3) 00:18:31.420 17.256 - 17.351: 98.8587% ( 2) 00:18:31.420 17.351 - 17.446: 98.8745% ( 2) 00:18:31.420 17.541 - 17.636: 98.9059% ( 4) 00:18:31.420 17.636 - 17.730: 98.9768% ( 9) 00:18:31.420 17.730 - 17.825: 99.0083% ( 4) 00:18:31.420 17.825 - 17.920: 99.0634% ( 7) 00:18:31.420 
17.920 - 18.015: 99.1106% ( 6) 00:18:31.420 18.015 - 18.110: 99.1578% ( 6) 00:18:31.420 18.110 - 18.204: 99.2680% ( 14) 00:18:31.420 18.204 - 18.299: 99.3388% ( 9) 00:18:31.420 18.299 - 18.394: 99.4254% ( 11) 00:18:31.420 18.394 - 18.489: 99.4963% ( 9) 00:18:31.420 18.489 - 18.584: 99.5592% ( 8) 00:18:31.420 18.584 - 18.679: 99.6065% ( 6) 00:18:31.420 18.679 - 18.773: 99.6537% ( 6) 00:18:31.420 18.773 - 18.868: 99.7009% ( 6) 00:18:31.420 18.868 - 18.963: 99.7088% ( 1) 00:18:31.420 18.963 - 19.058: 99.7560% ( 6) 00:18:31.420 19.058 - 19.153: 99.7717% ( 2) 00:18:31.420 19.153 - 19.247: 99.7796% ( 1) 00:18:31.420 19.342 - 19.437: 99.7875% ( 1) 00:18:31.420 19.437 - 19.532: 99.8032% ( 2) 00:18:31.420 19.627 - 19.721: 99.8190% ( 2) 00:18:31.420 19.911 - 20.006: 99.8268% ( 1) 00:18:31.420 20.290 - 20.385: 99.8347% ( 1) 00:18:31.420 22.187 - 22.281: 99.8426% ( 1) 00:18:31.420 22.281 - 22.376: 99.8505% ( 1) 00:18:31.420 25.031 - 25.221: 99.8583% ( 1) 00:18:31.420 25.221 - 25.410: 99.8662% ( 1) 00:18:31.420 26.738 - 26.927: 99.8741% ( 1) 00:18:31.420 28.634 - 28.824: 99.8898% ( 2) 00:18:31.420 34.513 - 34.702: 99.8977% ( 1) 00:18:31.420 388.361 - 391.396: 99.9055% ( 1) 00:18:31.420 3980.705 - 4004.978: 99.9843% ( 10) 00:18:31.420 4004.978 - 4029.250: 100.0000% ( 2) 00:18:31.420 00:18:31.420 Complete histogram 00:18:31.420 ================== 00:18:31.420 Range in us Cumulative Count 00:18:31.420 2.074 - 2.086: 2.1645% ( 275) 00:18:31.420 2.086 - 2.098: 33.8686% ( 4028) 00:18:31.420 2.098 - 2.110: 46.1551% ( 1561) 00:18:31.420 2.110 - 2.121: 50.3109% ( 528) 00:18:31.420 2.121 - 2.133: 59.4254% ( 1158) 00:18:31.420 2.133 - 2.145: 61.6608% ( 284) 00:18:31.420 2.145 - 2.157: 65.8638% ( 534) 00:18:31.420 2.157 - 2.169: 76.0016% ( 1288) 00:18:31.420 2.169 - 2.181: 78.1897% ( 278) 00:18:31.420 2.181 - 2.193: 79.8662% ( 213) 00:18:31.420 2.193 - 2.204: 82.2747% ( 306) 00:18:31.420 2.204 - 2.216: 82.7470% ( 60) 00:18:31.420 2.216 - 2.228: 84.1322% ( 176) 00:18:31.420 2.228 - 2.240: 
87.6820% ( 451) 00:18:31.420 2.240 - 2.252: 90.1063% ( 308) 00:18:31.420 2.252 - 2.264: 92.0346% ( 245) 00:18:31.420 2.264 - 2.276: 93.0972% ( 135) 00:18:31.420 2.276 - 2.287: 93.5222% ( 54) 00:18:31.420 2.287 - 2.299: 93.7584% ( 30) 00:18:31.420 2.299 - 2.311: 94.1362% ( 48) 00:18:31.420 2.311 - 2.323: 94.7501% ( 78) 00:18:31.420 2.323 - 2.335: 95.2932% ( 69) 00:18:31.420 2.335 - 2.347: 95.4270% ( 17) 00:18:31.420 2.347 - 2.359: 95.4978% ( 9) 00:18:31.420 2.359 - 2.370: 95.5293% ( 4) 00:18:31.420 2.370 - 2.382: 95.5923% ( 8) 00:18:31.420 2.382 - 2.394: 95.7025% ( 14) 00:18:31.420 2.394 - 2.406: 96.0960% ( 50) 00:18:31.420 2.406 - 2.418: 96.3872% ( 37) 00:18:31.420 2.418 - 2.430: 96.5840% ( 25) 00:18:31.420 2.430 - 2.441: 96.7257% ( 18) 00:18:31.420 2.441 - 2.453: 96.8910% ( 21) 00:18:31.420 2.453 - 2.465: 97.0720% ( 23) 00:18:31.420 2.465 - 2.477: 97.3318% ( 33) 00:18:31.420 2.477 - 2.489: 97.5600% ( 29) 00:18:31.420 2.489 - 2.501: 97.6938% ( 17) 00:18:31.420 2.501 - 2.513: 97.8434% ( 19) 00:18:31.420 2.513 - 2.524: 97.9772% ( 17) 00:18:31.420 2.524 - 2.536: 98.0401% ( 8) 00:18:31.420 2.536 - 2.548: 98.1110% ( 9) 00:18:31.420 2.548 - 2.560: 98.1661% ( 7) 00:18:31.420 2.560 - 2.572: 98.1976% ( 4) 00:18:31.420 2.572 - 2.584: 98.2369% ( 5) 00:18:31.420 2.584 - 2.596: 98.2684% ( 4) 00:18:31.420 2.596 - 2.607: 98.2841% ( 2) 00:18:31.421 2.607 - 2.619: 98.2999% ( 2) 00:18:31.421 2.619 - 2.631: 98.3078% ( 1) 00:18:31.421 2.655 - 2.667: 98.3156% ( 1) 00:18:31.421 2.667 - 2.679: 98.3235% ( 1) 00:18:31.421 2.679 - 2.690: 98.3314% ( 1) 00:18:31.421 2.690 - 2.702: 98.3392% ( 1) 00:18:31.421 2.702 - 2.714: 98.3471% ( 1) 00:18:31.421 2.821 - 2.833: 98.3628% ( 2) 00:18:31.421 2.856 - 2.868: 98.3786% ( 2) 00:18:31.421 3.176 - 3.200: 98.3865% ( 1) 00:18:31.421 3.390 - 3.413: 98.3943% ( 1) 00:18:31.421 3.508 - 3.532: 98.4022% ( 1) 00:18:31.421 3.556 - 3.579: 98.4179% ( 2) 00:18:31.421 3.579 - 3.603: 98.4258% ( 1) 00:18:31.421 3.603 - 3.627: 98.4416% ( 2) 00:18:31.421 3.627 - 3.650: 
98.4494% ( 1) 00:18:31.421 3.650 - 3.674: 98.4652% ( 2) [2024-11-17 11:12:56.034324] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:31.680 3.674 - 3.698: 98.4730% ( 1) 00:18:31.680 3.745 - 3.769: 98.4809% ( 1) 00:18:31.680 3.769 - 3.793: 98.4888% ( 1) 00:18:31.680 3.793 - 3.816: 98.5045% ( 2) 00:18:31.680 3.816 - 3.840: 98.5124% ( 1) 00:18:31.680 3.840 - 3.864: 98.5281% ( 2) 00:18:31.680 3.864 - 3.887: 98.5439% ( 2) 00:18:31.680 3.887 - 3.911: 98.5596% ( 2) 00:18:31.680 3.911 - 3.935: 98.5754% ( 2) 00:18:31.680 3.935 - 3.959: 98.5832% ( 1) 00:18:31.680 3.959 - 3.982: 98.5911% ( 1) 00:18:31.680 4.030 - 4.053: 98.5990% ( 1) 00:18:31.680 4.077 - 4.101: 98.6068% ( 1) 00:18:31.680 4.101 - 4.124: 98.6147% ( 1) 00:18:31.680 4.124 - 4.148: 98.6305% ( 2) 00:18:31.680 4.196 - 4.219: 98.6462% ( 2) 00:18:31.680 4.219 - 4.243: 98.6541% ( 1) 00:18:31.680 4.243 - 4.267: 98.6619% ( 1) 00:18:31.680 5.689 - 5.713: 98.6698% ( 1) 00:18:31.680 5.736 - 5.760: 98.6777% ( 1) 00:18:31.680 5.973 - 5.997: 98.6856% ( 1) 00:18:31.680 6.305 - 6.353: 98.6934% ( 1) 00:18:31.680 6.353 - 6.400: 98.7092% ( 2) 00:18:31.680 6.637 - 6.684: 98.7249% ( 2) 00:18:31.680 6.779 - 6.827: 98.7328% ( 1) 00:18:31.680 6.874 - 6.921: 98.7407% ( 1) 00:18:31.680 7.111 - 7.159: 98.7485% ( 1) 00:18:31.680 7.206 - 7.253: 98.7564% ( 1) 00:18:31.680 7.538 - 7.585: 98.7643% ( 1) 00:18:31.680 7.585 - 7.633: 98.7721% ( 1) 00:18:31.680 7.680 - 7.727: 98.7800% ( 1) 00:18:31.680 7.917 - 7.964: 98.7879% ( 1) 00:18:31.680 8.581 - 8.628: 98.7957% ( 1) 00:18:31.680 8.723 - 8.770: 98.8036% ( 1) 00:18:31.680 9.434 - 9.481: 98.8115% ( 1) 00:18:31.680 9.576 - 9.624: 98.8194% ( 1) 00:18:31.680 10.050 - 10.098: 98.8272% ( 1) 00:18:31.680 10.714 - 10.761: 98.8351% ( 1) 00:18:31.680 11.615 - 11.662: 98.8430% ( 1) 00:18:31.680 15.455 - 15.550: 98.8508% ( 1) 00:18:31.680 15.739 - 15.834: 98.8745% ( 3) 00:18:31.680 15.834 - 15.929: 98.8823% ( 1) 00:18:31.680 15.929 - 
16.024: 98.9059% ( 3)
00:18:31.680 16.024 - 16.119: 98.9374% ( 4)
00:18:31.680 16.119 - 16.213: 98.9610% ( 3)
00:18:31.680 16.213 - 16.308: 98.9768% ( 2)
00:18:31.680 16.308 - 16.403: 99.0161% ( 5)
00:18:31.680 16.403 - 16.498: 99.0555% ( 5)
00:18:31.680 16.498 - 16.593: 99.1342% ( 10)
00:18:31.680 16.593 - 16.687: 99.1657% ( 4)
00:18:31.680 16.687 - 16.782: 99.2287% ( 8)
00:18:31.680 16.782 - 16.877: 99.2759% ( 6)
00:18:31.680 16.877 - 16.972: 99.2995% ( 3)
00:18:31.680 16.972 - 17.067: 99.3152% ( 2)
00:18:31.680 17.067 - 17.161: 99.3310% ( 2)
00:18:31.680 17.161 - 17.256: 99.3546% ( 3)
00:18:31.680 17.256 - 17.351: 99.3625% ( 1)
00:18:31.680 17.636 - 17.730: 99.3782% ( 2)
00:18:31.680 17.730 - 17.825: 99.3861% ( 1)
00:18:31.680 17.825 - 17.920: 99.3939% ( 1)
00:18:31.680 18.015 - 18.110: 99.4018% ( 1)
00:18:31.680 18.110 - 18.204: 99.4097% ( 1)
00:18:31.680 18.204 - 18.299: 99.4176% ( 1)
00:18:31.680 18.394 - 18.489: 99.4254% ( 1)
00:18:31.680 27.686 - 27.876: 99.4333% ( 1)
00:18:31.680 3980.705 - 4004.978: 99.9134% ( 61)
00:18:31.680 4004.978 - 4029.250: 99.9843% ( 9)
00:18:31.680 4029.250 - 4053.523: 99.9921% ( 1)
00:18:31.680 6990.507 - 7039.052: 100.0000% ( 1)
00:18:31.680
00:18:31.680 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:18:31.680 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:18:31.680 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:18:31.680 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:18:31.680 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:31.939 [
00:18:31.939   {
00:18:31.939     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:31.939     "subtype": "Discovery",
00:18:31.939     "listen_addresses": [],
00:18:31.939     "allow_any_host": true,
00:18:31.939     "hosts": []
00:18:31.939   },
00:18:31.939   {
00:18:31.939     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:31.939     "subtype": "NVMe",
00:18:31.939     "listen_addresses": [
00:18:31.939       {
00:18:31.939         "trtype": "VFIOUSER",
00:18:31.939         "adrfam": "IPv4",
00:18:31.939         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:31.939         "trsvcid": "0"
00:18:31.939       }
00:18:31.939     ],
00:18:31.939     "allow_any_host": true,
00:18:31.939     "hosts": [],
00:18:31.939     "serial_number": "SPDK1",
00:18:31.939     "model_number": "SPDK bdev Controller",
00:18:31.939     "max_namespaces": 32,
00:18:31.939     "min_cntlid": 1,
00:18:31.939     "max_cntlid": 65519,
00:18:31.939     "namespaces": [
00:18:31.939       {
00:18:31.939         "nsid": 1,
00:18:31.939         "bdev_name": "Malloc1",
00:18:31.939         "name": "Malloc1",
00:18:31.939         "nguid": "A5B30648610649EC905036ED4AA9A34E",
00:18:31.939         "uuid": "a5b30648-6106-49ec-9050-36ed4aa9a34e"
00:18:31.939       }
00:18:31.939     ]
00:18:31.939   },
00:18:31.939   {
00:18:31.939     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:31.939     "subtype": "NVMe",
00:18:31.939     "listen_addresses": [
00:18:31.939       {
00:18:31.939         "trtype": "VFIOUSER",
00:18:31.939         "adrfam": "IPv4",
00:18:31.939         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:31.939         "trsvcid": "0"
00:18:31.939       }
00:18:31.939     ],
00:18:31.939     "allow_any_host": true,
00:18:31.939     "hosts": [],
00:18:31.939     "serial_number": "SPDK2",
00:18:31.939     "model_number": "SPDK bdev Controller",
00:18:31.939     "max_namespaces": 32,
00:18:31.939     "min_cntlid": 1,
00:18:31.939     "max_cntlid": 65519,
00:18:31.939     "namespaces": [
00:18:31.939       {
00:18:31.939         "nsid": 1,
00:18:31.939         "bdev_name": "Malloc2",
00:18:31.939         "name": "Malloc2",
00:18:31.939         "nguid": "B6BC5C988B5C43B59956A039D40751EE",
00:18:31.939         "uuid": "b6bc5c98-8b5c-43b5-9956-a039d40751ee"
00:18:31.939       }
00:18:31.939     ]
00:18:31.939   }
00:18:31.939 ]
00:18:31.939 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:18:31.939 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=230343
00:18:31.939 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:18:31.939 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:18:31.939 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1
00:18:31.940 [2024-11-17 11:12:56.532104] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:18:31.940 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:18:32.509 Malloc3
00:18:32.509 11:12:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:18:32.509 [2024-11-17 11:12:57.126434] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:32.509 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:32.770 Asynchronous Event Request test
00:18:32.770 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:32.770 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:32.770
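The polling traced above (autotest_common.sh@1269 through @1280) is a simple wait-for-touch-file loop: the backgrounded `aer` tool touches the file once its event callbacks are registered, and the harness polls for it every 100 ms. A minimal reconstruction of that loop, not the exact autotest source, looks like this (the `.example` path is an illustrative stand-in, not the harness's real touch file):

```shell
# Reconstruction of the waitforfile loop seen in the xtrace above:
# poll every 100 ms for the touch file, giving up after 200 attempts (~20 s).
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -ge 200 ]; then
            echo "timed out waiting for $1" >&2
            return 1
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}

# Illustrative use: the AER test touches this file when it is ready.
touch /tmp/aer_touch_file.example
waitforfile /tmp/aer_touch_file.example && echo "file appeared"
rm -f /tmp/aer_touch_file.example
```

The 200-iteration bound matches the `'[' 0 -lt 200 ']'` comparisons visible in the trace.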
Registering asynchronous event callbacks...
00:18:32.770 Starting namespace attribute notice tests for all controllers...
00:18:32.770 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:18:32.770 aer_cb - Changed Namespace
00:18:32.770 Cleaning up...
00:18:32.770 [
00:18:32.770   {
00:18:32.770     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:32.770     "subtype": "Discovery",
00:18:32.770     "listen_addresses": [],
00:18:32.770     "allow_any_host": true,
00:18:32.770     "hosts": []
00:18:32.770   },
00:18:32.770   {
00:18:32.770     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:32.770     "subtype": "NVMe",
00:18:32.770     "listen_addresses": [
00:18:32.770       {
00:18:32.770         "trtype": "VFIOUSER",
00:18:32.770         "adrfam": "IPv4",
00:18:32.770         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:32.770         "trsvcid": "0"
00:18:32.770       }
00:18:32.770     ],
00:18:32.770     "allow_any_host": true,
00:18:32.770     "hosts": [],
00:18:32.770     "serial_number": "SPDK1",
00:18:32.770     "model_number": "SPDK bdev Controller",
00:18:32.770     "max_namespaces": 32,
00:18:32.770     "min_cntlid": 1,
00:18:32.770     "max_cntlid": 65519,
00:18:32.770     "namespaces": [
00:18:32.770       {
00:18:32.770         "nsid": 1,
00:18:32.770         "bdev_name": "Malloc1",
00:18:32.770         "name": "Malloc1",
00:18:32.770         "nguid": "A5B30648610649EC905036ED4AA9A34E",
00:18:32.770         "uuid": "a5b30648-6106-49ec-9050-36ed4aa9a34e"
00:18:32.770       },
00:18:32.770       {
00:18:32.770         "nsid": 2,
00:18:32.770         "bdev_name": "Malloc3",
00:18:32.770         "name": "Malloc3",
00:18:32.770         "nguid": "017BAF632DDE4A46950FF67DF788C695",
00:18:32.770         "uuid": "017baf63-2dde-4a46-950f-f67df788c695"
00:18:32.770       }
00:18:32.770     ]
00:18:32.770   },
00:18:32.770   {
00:18:32.770     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:32.770     "subtype": "NVMe",
00:18:32.770     "listen_addresses": [
00:18:32.770       {
00:18:32.770         "trtype": "VFIOUSER",
00:18:32.770         "adrfam": "IPv4",
00:18:32.770         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:32.770         "trsvcid": "0"
00:18:32.770       }
00:18:32.770     ],
00:18:32.770     "allow_any_host": true,
00:18:32.770     "hosts": [],
00:18:32.770     "serial_number": "SPDK2",
00:18:32.770     "model_number": "SPDK bdev Controller",
00:18:32.770     "max_namespaces": 32,
00:18:32.770     "min_cntlid": 1,
00:18:32.770     "max_cntlid": 65519,
00:18:32.770     "namespaces": [
00:18:32.770       {
00:18:32.770         "nsid": 1,
00:18:32.770         "bdev_name": "Malloc2",
00:18:32.770         "name": "Malloc2",
00:18:32.770         "nguid": "B6BC5C988B5C43B59956A039D40751EE",
00:18:32.770         "uuid": "b6bc5c98-8b5c-43b5-9956-a039d40751ee"
00:18:32.770       }
00:18:32.770     ]
00:18:32.770   }
00:18:32.770 ]
00:18:33.033 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 230343
00:18:33.033 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:33.033 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:18:33.033 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:18:33.033 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:18:33.033 [2024-11-17 11:12:57.448091] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
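The RPC dump above shows the point of the AER test: after `nvmf_subsystem_add_ns`, cnode1 now reports Malloc3 as nsid 2 alongside Malloc1. Outside the harness, a quick way to confirm the hot-add landed is to count the `nsid` entries for the subsystem in the `nvmf_get_subsystems` reply. A minimal sketch with a trimmed stand-in for the real JSON (in a live run it would come from `scripts/rpc.py nvmf_get_subsystems`):

```shell
# Hypothetical post-add check: cnode1 should report two namespaces
# (Malloc1 as nsid 1, Malloc3 as nsid 2). The JSON below is a trimmed
# stand-in for the RPC reply shown in the log.
subsystems_json='{"nqn": "nqn.2019-07.io.spdk:cnode1", "namespaces": [{"nsid": 1}, {"nsid": 2}]}'
ns_count=$(printf '%s' "$subsystems_json" | grep -o '"nsid"' | wc -l)
echo "namespace count: $ns_count"
[ "$ns_count" -eq 2 ] && echo "Malloc3 attach visible"
```

A `jq`-based query would be more robust than `grep`, but `grep -o | wc -l` keeps the sketch dependency-free.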
00:18:33.034 [2024-11-17 11:12:57.448132] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid230478 ]
00:18:33.034 [2024-11-17 11:12:57.499344] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:18:33.034 [2024-11-17 11:12:57.504837] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:33.034 [2024-11-17 11:12:57.504882] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f19a171c000
00:18:33.034 [2024-11-17 11:12:57.505839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.506840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.507849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.508855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.509867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.510876] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.511881] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.512889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:33.034 [2024-11-17 11:12:57.513892] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:33.034 [2024-11-17 11:12:57.513913] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f19a0414000
00:18:33.034 [2024-11-17 11:12:57.515028] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:33.034 [2024-11-17 11:12:57.529265] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:18:33.034 [2024-11-17 11:12:57.529308] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:18:33.034 [2024-11-17 11:12:57.534412] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:18:33.034 [2024-11-17 11:12:57.534471] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:18:33.034 [2024-11-17 11:12:57.534611] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:18:33.034 [2024-11-17 11:12:57.534638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:18:33.034 [2024-11-17 11:12:57.534649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:18:33.034 [2024-11-17 11:12:57.535420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*:
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:33.034 [2024-11-17 11:12:57.535441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:33.034 [2024-11-17 11:12:57.535453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:33.034 [2024-11-17 11:12:57.536431] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:33.034 [2024-11-17 11:12:57.536453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:33.034 [2024-11-17 11:12:57.536467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.537440] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:33.034 [2024-11-17 11:12:57.537461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.538453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:33.034 [2024-11-17 11:12:57.538475] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:33.034 [2024-11-17 11:12:57.538483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.538495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.538620] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:33.034 [2024-11-17 11:12:57.538630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.538639] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:33.034 [2024-11-17 11:12:57.539457] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:33.034 [2024-11-17 11:12:57.540463] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:33.034 [2024-11-17 11:12:57.541473] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:33.034 [2024-11-17 11:12:57.542467] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:33.034 [2024-11-17 11:12:57.542541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:33.034 [2024-11-17 11:12:57.544538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:33.034 [2024-11-17 11:12:57.544560] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:33.034 [2024-11-17 11:12:57.544570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.544595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:33.034 [2024-11-17 11:12:57.544609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.544630] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.034 [2024-11-17 11:12:57.544640] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.034 [2024-11-17 11:12:57.544647] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.034 [2024-11-17 11:12:57.544667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.034 [2024-11-17 11:12:57.552544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:33.034 [2024-11-17 11:12:57.552567] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:33.034 [2024-11-17 11:12:57.552581] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:33.034 [2024-11-17 11:12:57.552589] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:33.034 [2024-11-17 11:12:57.552599] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:33.034 [2024-11-17 11:12:57.552611] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:33.034 [2024-11-17 11:12:57.552621] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:33.034 [2024-11-17 11:12:57.552629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.552646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.552663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:33.034 [2024-11-17 11:12:57.560540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:33.034 [2024-11-17 11:12:57.560564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.034 [2024-11-17 11:12:57.560577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.034 [2024-11-17 11:12:57.560589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.034 [2024-11-17 11:12:57.560601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.034 [2024-11-17 11:12:57.560609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.560622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.560636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:33.034 [2024-11-17 11:12:57.568535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:33.034 [2024-11-17 11:12:57.568559] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:33.034 [2024-11-17 11:12:57.568571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.568584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.568594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:33.034 [2024-11-17 11:12:57.568608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:33.034 [2024-11-17 11:12:57.576533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:33.034 [2024-11-17 11:12:57.576612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.576634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:33.035 
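The `SET FEATURES NUMBER OF QUEUES` completion above carries `cdw0:7e007e`. Per the NVMe Number of Queues feature, completion dword 0 packs the allocated counts as 0-based values: NSQA in bits 15:0 and NCQA in bits 31:16, so `0x007e` = 126 means 127 queues of each type. A small arithmetic sketch of that decoding (names are illustrative):

```shell
# Decode the Number of Queues completion dword 0 seen in the log (cdw0:7e007e).
# NSQA (bits 15:0) and NCQA (bits 31:16) are 0-based, so add 1 to each.
cdw0=0x7e007e
nsqa=$(( (cdw0 & 0xffff) + 1 ))
ncqa=$(( (cdw0 >> 16 & 0xffff) + 1 ))
echo "I/O submission queues: $nsqa, completion queues: $ncqa"
```

The decoded value of 127 matches the "Max Number of I/O Queues: 127" line in the identify summary later in this log.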
[2024-11-17 11:12:57.576647] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:33.035 [2024-11-17 11:12:57.576656] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:33.035 [2024-11-17 11:12:57.576662] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.576671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.584535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.584558] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:33.035 [2024-11-17 11:12:57.584578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.584595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.584607] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.035 [2024-11-17 11:12:57.584615] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.035 [2024-11-17 11:12:57.584621] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.584631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.592533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.592562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.592580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.592593] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:33.035 [2024-11-17 11:12:57.592602] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.035 [2024-11-17 11:12:57.592608] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.592617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.600552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.600585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600657] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:33.035 [2024-11-17 11:12:57.600664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:33.035 [2024-11-17 11:12:57.600673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:33.035 [2024-11-17 11:12:57.600699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.608538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.608564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.616535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.616560] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.624533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 
11:12:57.624558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.632535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.632566] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:33.035 [2024-11-17 11:12:57.632577] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:33.035 [2024-11-17 11:12:57.632583] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:33.035 [2024-11-17 11:12:57.632589] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:33.035 [2024-11-17 11:12:57.632595] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:33.035 [2024-11-17 11:12:57.632604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:33.035 [2024-11-17 11:12:57.632616] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:33.035 [2024-11-17 11:12:57.632623] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:33.035 [2024-11-17 11:12:57.632629] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.632638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.632649] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:33.035 [2024-11-17 11:12:57.632657] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:33.035 [2024-11-17 11:12:57.632662] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.632671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.632683] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:33.035 [2024-11-17 11:12:57.632691] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:33.035 [2024-11-17 11:12:57.632700] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:33.035 [2024-11-17 11:12:57.632709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:33.035 [2024-11-17 11:12:57.640534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.640561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.640579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:33.035 [2024-11-17 11:12:57.640592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:33.035 ===================================================== 00:18:33.035 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:33.035 ===================================================== 00:18:33.035 Controller Capabilities/Features 00:18:33.035 
================================
00:18:33.035 Vendor ID: 4e58
00:18:33.035 Subsystem Vendor ID: 4e58
00:18:33.035 Serial Number: SPDK2
00:18:33.035 Model Number: SPDK bdev Controller
00:18:33.035 Firmware Version: 25.01
00:18:33.035 Recommended Arb Burst: 6
00:18:33.035 IEEE OUI Identifier: 8d 6b 50
00:18:33.035 Multi-path I/O
00:18:33.035 May have multiple subsystem ports: Yes
00:18:33.035 May have multiple controllers: Yes
00:18:33.035 Associated with SR-IOV VF: No
00:18:33.035 Max Data Transfer Size: 131072
00:18:33.035 Max Number of Namespaces: 32
00:18:33.035 Max Number of I/O Queues: 127
00:18:33.035 NVMe Specification Version (VS): 1.3
00:18:33.035 NVMe Specification Version (Identify): 1.3
00:18:33.035 Maximum Queue Entries: 256
00:18:33.035 Contiguous Queues Required: Yes
00:18:33.035 Arbitration Mechanisms Supported
00:18:33.035 Weighted Round Robin: Not Supported
00:18:33.035 Vendor Specific: Not Supported
00:18:33.035 Reset Timeout: 15000 ms
00:18:33.035 Doorbell Stride: 4 bytes
00:18:33.035 NVM Subsystem Reset: Not Supported
00:18:33.035 Command Sets Supported
00:18:33.035 NVM Command Set: Supported
00:18:33.035 Boot Partition: Not Supported
00:18:33.035 Memory Page Size Minimum: 4096 bytes
00:18:33.035 Memory Page Size Maximum: 4096 bytes
00:18:33.035 Persistent Memory Region: Not Supported
00:18:33.035 Optional Asynchronous Events Supported
00:18:33.035 Namespace Attribute Notices: Supported
00:18:33.035 Firmware Activation Notices: Not Supported
00:18:33.035 ANA Change Notices: Not Supported
00:18:33.035 PLE Aggregate Log Change Notices: Not Supported
00:18:33.035 LBA Status Info Alert Notices: Not Supported
00:18:33.035 EGE Aggregate Log Change Notices: Not Supported
00:18:33.035 Normal NVM Subsystem Shutdown event: Not Supported
00:18:33.035 Zone Descriptor Change Notices: Not Supported
00:18:33.035 Discovery Log Change Notices: Not Supported
00:18:33.035 Controller Attributes
00:18:33.035 128-bit Host Identifier: Supported
00:18:33.035 Non-Operational Permissive Mode: Not Supported
00:18:33.035 NVM Sets: Not Supported
00:18:33.035 Read Recovery Levels: Not Supported
00:18:33.035 Endurance Groups: Not Supported
00:18:33.035 Predictable Latency Mode: Not Supported
00:18:33.036 Traffic Based Keep ALive: Not Supported
00:18:33.036 Namespace Granularity: Not Supported
00:18:33.036 SQ Associations: Not Supported
00:18:33.036 UUID List: Not Supported
00:18:33.036 Multi-Domain Subsystem: Not Supported
00:18:33.036 Fixed Capacity Management: Not Supported
00:18:33.036 Variable Capacity Management: Not Supported
00:18:33.036 Delete Endurance Group: Not Supported
00:18:33.036 Delete NVM Set: Not Supported
00:18:33.036 Extended LBA Formats Supported: Not Supported
00:18:33.036 Flexible Data Placement Supported: Not Supported
00:18:33.036
00:18:33.036 Controller Memory Buffer Support
00:18:33.036 ================================
00:18:33.036 Supported: No
00:18:33.036
00:18:33.036 Persistent Memory Region Support
00:18:33.036 ================================
00:18:33.036 Supported: No
00:18:33.036
00:18:33.036 Admin Command Set Attributes
00:18:33.036 ============================
00:18:33.036 Security Send/Receive: Not Supported
00:18:33.036 Format NVM: Not Supported
00:18:33.036 Firmware Activate/Download: Not Supported
00:18:33.036 Namespace Management: Not Supported
00:18:33.036 Device Self-Test: Not Supported
00:18:33.036 Directives: Not Supported
00:18:33.036 NVMe-MI: Not Supported
00:18:33.036 Virtualization Management: Not Supported
00:18:33.036 Doorbell Buffer Config: Not Supported
00:18:33.036 Get LBA Status Capability: Not Supported
00:18:33.036 Command & Feature Lockdown Capability: Not Supported
00:18:33.036 Abort Command Limit: 4
00:18:33.036 Async Event Request Limit: 4
00:18:33.036 Number of Firmware Slots: N/A
00:18:33.036 Firmware Slot 1 Read-Only: N/A
00:18:33.036 Firmware Activation Without Reset: N/A
00:18:33.036 Multiple Update Detection Support: N/A
00:18:33.036 Firmware Update Granularity: No Information Provided
00:18:33.036 Per-Namespace SMART Log: No
00:18:33.036 Asymmetric Namespace Access Log Page: Not Supported
00:18:33.036 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:18:33.036 Command Effects Log Page: Supported
00:18:33.036 Get Log Page Extended Data: Supported
00:18:33.036 Telemetry Log Pages: Not Supported
00:18:33.036 Persistent Event Log Pages: Not Supported
00:18:33.036 Supported Log Pages Log Page: May Support
00:18:33.036 Commands Supported & Effects Log Page: Not Supported
00:18:33.036 Feature Identifiers & Effects Log Page:May Support
00:18:33.036 NVMe-MI Commands & Effects Log Page: May Support
00:18:33.036 Data Area 4 for Telemetry Log: Not Supported
00:18:33.036 Error Log Page Entries Supported: 128
00:18:33.036 Keep Alive: Supported
00:18:33.036 Keep Alive Granularity: 10000 ms
00:18:33.036
00:18:33.036 NVM Command Set Attributes
00:18:33.036 ==========================
00:18:33.036 Submission Queue Entry Size
00:18:33.036 Max: 64
00:18:33.036 Min: 64
00:18:33.036 Completion Queue Entry Size
00:18:33.036 Max: 16
00:18:33.036 Min: 16
00:18:33.036 Number of Namespaces: 32
00:18:33.036 Compare Command: Supported
00:18:33.036 Write Uncorrectable Command: Not Supported
00:18:33.036 Dataset Management Command: Supported
00:18:33.036 Write Zeroes Command: Supported
00:18:33.036 Set Features Save Field: Not Supported
00:18:33.036 Reservations: Not Supported
00:18:33.036 Timestamp: Not Supported
00:18:33.036 Copy: Supported
00:18:33.036 Volatile Write Cache: Present
00:18:33.036 Atomic Write Unit (Normal): 1
00:18:33.036 Atomic Write Unit (PFail): 1
00:18:33.036 Atomic Compare & Write Unit: 1
00:18:33.036 Fused Compare & Write: Supported
00:18:33.036 Scatter-Gather List
00:18:33.036 SGL Command Set: Supported (Dword aligned)
00:18:33.036 SGL Keyed: Not Supported
00:18:33.036 SGL Bit Bucket Descriptor: Not Supported
00:18:33.036 SGL Metadata Pointer: Not Supported
00:18:33.036 Oversized SGL: Not Supported
00:18:33.036 SGL
Metadata Address: Not Supported 00:18:33.036 SGL Offset: Not Supported 00:18:33.036 Transport SGL Data Block: Not Supported 00:18:33.036 Replay Protected Memory Block: Not Supported 00:18:33.036 00:18:33.036 Firmware Slot Information 00:18:33.036 ========================= 00:18:33.036 Active slot: 1 00:18:33.036 Slot 1 Firmware Revision: 25.01 00:18:33.036 00:18:33.036 00:18:33.036 Commands Supported and Effects 00:18:33.036 ============================== 00:18:33.036 Admin Commands 00:18:33.036 -------------- 00:18:33.036 Get Log Page (02h): Supported 00:18:33.036 Identify (06h): Supported 00:18:33.036 Abort (08h): Supported 00:18:33.036 Set Features (09h): Supported 00:18:33.036 Get Features (0Ah): Supported 00:18:33.036 Asynchronous Event Request (0Ch): Supported 00:18:33.036 Keep Alive (18h): Supported 00:18:33.036 I/O Commands 00:18:33.036 ------------ 00:18:33.036 Flush (00h): Supported LBA-Change 00:18:33.036 Write (01h): Supported LBA-Change 00:18:33.036 Read (02h): Supported 00:18:33.036 Compare (05h): Supported 00:18:33.036 Write Zeroes (08h): Supported LBA-Change 00:18:33.036 Dataset Management (09h): Supported LBA-Change 00:18:33.036 Copy (19h): Supported LBA-Change 00:18:33.036 00:18:33.036 Error Log 00:18:33.036 ========= 00:18:33.036 00:18:33.036 Arbitration 00:18:33.036 =========== 00:18:33.036 Arbitration Burst: 1 00:18:33.036 00:18:33.036 Power Management 00:18:33.036 ================ 00:18:33.036 Number of Power States: 1 00:18:33.036 Current Power State: Power State #0 00:18:33.036 Power State #0: 00:18:33.036 Max Power: 0.00 W 00:18:33.036 Non-Operational State: Operational 00:18:33.036 Entry Latency: Not Reported 00:18:33.036 Exit Latency: Not Reported 00:18:33.036 Relative Read Throughput: 0 00:18:33.036 Relative Read Latency: 0 00:18:33.036 Relative Write Throughput: 0 00:18:33.036 Relative Write Latency: 0 00:18:33.036 Idle Power: Not Reported 00:18:33.036 Active Power: Not Reported 00:18:33.036 Non-Operational Permissive Mode: Not 
Supported 00:18:33.036 00:18:33.036 Health Information 00:18:33.036 ================== 00:18:33.036 Critical Warnings: 00:18:33.036 Available Spare Space: OK 00:18:33.036 Temperature: OK 00:18:33.036 Device Reliability: OK 00:18:33.036 Read Only: No 00:18:33.036 Volatile Memory Backup: OK 00:18:33.036 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:33.036 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:33.036 Available Spare: 0% 00:18:33.036 Available Sp[2024-11-17 11:12:57.640722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:33.036 [2024-11-17 11:12:57.648536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:33.036 [2024-11-17 11:12:57.648588] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:33.036 [2024-11-17 11:12:57.648606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.036 [2024-11-17 11:12:57.648617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.036 [2024-11-17 11:12:57.648627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.036 [2024-11-17 11:12:57.648636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.036 [2024-11-17 11:12:57.648721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:33.036 [2024-11-17 11:12:57.648742] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:33.036 
[2024-11-17 11:12:57.649721] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:33.036 [2024-11-17 11:12:57.649791] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:33.036 [2024-11-17 11:12:57.649805] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:33.036 [2024-11-17 11:12:57.650731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:33.036 [2024-11-17 11:12:57.650755] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:33.036 [2024-11-17 11:12:57.650807] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:33.036 [2024-11-17 11:12:57.653536] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:33.296 are Threshold: 0% 00:18:33.296 Life Percentage Used: 0% 00:18:33.296 Data Units Read: 0 00:18:33.296 Data Units Written: 0 00:18:33.296 Host Read Commands: 0 00:18:33.296 Host Write Commands: 0 00:18:33.296 Controller Busy Time: 0 minutes 00:18:33.296 Power Cycles: 0 00:18:33.296 Power On Hours: 0 hours 00:18:33.296 Unsafe Shutdowns: 0 00:18:33.296 Unrecoverable Media Errors: 0 00:18:33.296 Lifetime Error Log Entries: 0 00:18:33.296 Warning Temperature Time: 0 minutes 00:18:33.296 Critical Temperature Time: 0 minutes 00:18:33.296 00:18:33.296 Number of Queues 00:18:33.296 ================ 00:18:33.296 Number of I/O Submission Queues: 127 00:18:33.296 Number of I/O Completion Queues: 127 00:18:33.296 00:18:33.296 Active Namespaces 00:18:33.296 ================= 00:18:33.296 Namespace ID:1 00:18:33.296 Error Recovery Timeout: Unlimited 
00:18:33.296 Command Set Identifier: NVM (00h) 00:18:33.296 Deallocate: Supported 00:18:33.296 Deallocated/Unwritten Error: Not Supported 00:18:33.296 Deallocated Read Value: Unknown 00:18:33.296 Deallocate in Write Zeroes: Not Supported 00:18:33.296 Deallocated Guard Field: 0xFFFF 00:18:33.296 Flush: Supported 00:18:33.296 Reservation: Supported 00:18:33.296 Namespace Sharing Capabilities: Multiple Controllers 00:18:33.296 Size (in LBAs): 131072 (0GiB) 00:18:33.296 Capacity (in LBAs): 131072 (0GiB) 00:18:33.296 Utilization (in LBAs): 131072 (0GiB) 00:18:33.296 NGUID: B6BC5C988B5C43B59956A039D40751EE 00:18:33.296 UUID: b6bc5c98-8b5c-43b5-9956-a039d40751ee 00:18:33.296 Thin Provisioning: Not Supported 00:18:33.296 Per-NS Atomic Units: Yes 00:18:33.296 Atomic Boundary Size (Normal): 0 00:18:33.296 Atomic Boundary Size (PFail): 0 00:18:33.296 Atomic Boundary Offset: 0 00:18:33.296 Maximum Single Source Range Length: 65535 00:18:33.296 Maximum Copy Length: 65535 00:18:33.296 Maximum Source Range Count: 1 00:18:33.296 NGUID/EUI64 Never Reused: No 00:18:33.296 Namespace Write Protected: No 00:18:33.296 Number of LBA Formats: 1 00:18:33.296 Current LBA Format: LBA Format #00 00:18:33.296 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:33.296 00:18:33.296 11:12:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:33.296 [2024-11-17 11:12:57.905448] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:38.595 Initializing NVMe Controllers 00:18:38.595 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:38.595 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:38.595 Initialization complete. Launching workers. 00:18:38.595 ======================================================== 00:18:38.595 Latency(us) 00:18:38.595 Device Information : IOPS MiB/s Average min max 00:18:38.595 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33704.48 131.66 3797.06 1171.28 9009.33 00:18:38.595 ======================================================== 00:18:38.595 Total : 33704.48 131.66 3797.06 1171.28 9009.33 00:18:38.595 00:18:38.595 [2024-11-17 11:13:03.017935] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:38.595 11:13:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:38.856 [2024-11-17 11:13:03.266636] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:44.144 Initializing NVMe Controllers 00:18:44.144 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:44.144 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:44.144 Initialization complete. Launching workers. 
00:18:44.144 ======================================================== 00:18:44.144 Latency(us) 00:18:44.144 Device Information : IOPS MiB/s Average min max 00:18:44.144 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31531.15 123.17 4059.67 1244.83 7451.75 00:18:44.144 ======================================================== 00:18:44.144 Total : 31531.15 123.17 4059.67 1244.83 7451.75 00:18:44.144 00:18:44.144 [2024-11-17 11:13:08.293131] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:44.144 11:13:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:44.144 [2024-11-17 11:13:08.522011] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:49.444 [2024-11-17 11:13:13.643900] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:49.444 Initializing NVMe Controllers 00:18:49.444 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:49.444 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:49.444 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:49.444 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:49.444 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:49.444 Initialization complete. Launching workers. 
00:18:49.444 Starting thread on core 2 00:18:49.444 Starting thread on core 3 00:18:49.444 Starting thread on core 1 00:18:49.444 11:13:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:49.444 [2024-11-17 11:13:13.956011] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.738 [2024-11-17 11:13:17.124858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.738 Initializing NVMe Controllers 00:18:52.738 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.738 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.738 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:52.738 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:52.738 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:52.738 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:52.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:52.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:52.739 Initialization complete. Launching workers. 
00:18:52.739 Starting thread on core 1 with urgent priority queue 00:18:52.739 Starting thread on core 2 with urgent priority queue 00:18:52.739 Starting thread on core 3 with urgent priority queue 00:18:52.739 Starting thread on core 0 with urgent priority queue 00:18:52.739 SPDK bdev Controller (SPDK2 ) core 0: 1306.00 IO/s 76.57 secs/100000 ios 00:18:52.739 SPDK bdev Controller (SPDK2 ) core 1: 1065.33 IO/s 93.87 secs/100000 ios 00:18:52.739 SPDK bdev Controller (SPDK2 ) core 2: 1382.33 IO/s 72.34 secs/100000 ios 00:18:52.739 SPDK bdev Controller (SPDK2 ) core 3: 1195.00 IO/s 83.68 secs/100000 ios 00:18:52.739 ======================================================== 00:18:52.739 00:18:52.739 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:52.998 [2024-11-17 11:13:17.441040] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.998 Initializing NVMe Controllers 00:18:52.998 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.998 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:52.998 Namespace ID: 1 size: 0GB 00:18:52.998 Initialization complete. 00:18:52.998 INFO: using host memory buffer for IO 00:18:52.998 Hello world! 
00:18:52.998 [2024-11-17 11:13:17.453107] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.998 11:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:53.255 [2024-11-17 11:13:17.765825] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:54.636 Initializing NVMe Controllers 00:18:54.636 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.636 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:54.636 Initialization complete. Launching workers. 00:18:54.636 submit (in ns) avg, min, max = 8249.8, 3480.0, 4016295.6 00:18:54.636 complete (in ns) avg, min, max = 25709.6, 2065.6, 4017201.1 00:18:54.636 00:18:54.636 Submit histogram 00:18:54.636 ================ 00:18:54.636 Range in us Cumulative Count 00:18:54.636 3.461 - 3.484: 0.0077% ( 1) 00:18:54.636 3.484 - 3.508: 0.0310% ( 3) 00:18:54.636 3.508 - 3.532: 0.8907% ( 111) 00:18:54.636 3.532 - 3.556: 2.5405% ( 213) 00:18:54.637 3.556 - 3.579: 7.1877% ( 600) 00:18:54.637 3.579 - 3.603: 14.0578% ( 887) 00:18:54.637 3.603 - 3.627: 24.9477% ( 1406) 00:18:54.637 3.627 - 3.650: 35.1483% ( 1317) 00:18:54.637 3.650 - 3.674: 43.0486% ( 1020) 00:18:54.637 3.674 - 3.698: 49.0202% ( 771) 00:18:54.637 3.698 - 3.721: 55.7432% ( 868) 00:18:54.637 3.721 - 3.745: 60.9790% ( 676) 00:18:54.637 3.745 - 3.769: 65.7889% ( 621) 00:18:54.637 3.769 - 3.793: 69.2820% ( 451) 00:18:54.637 3.793 - 3.816: 72.4576% ( 410) 00:18:54.637 3.816 - 3.840: 75.5170% ( 395) 00:18:54.637 3.840 - 3.864: 79.3200% ( 491) 00:18:54.637 3.864 - 3.887: 83.0687% ( 484) 00:18:54.637 3.887 - 3.911: 85.9035% ( 366) 00:18:54.637 3.911 - 3.935: 87.8166% ( 247) 00:18:54.637 3.935 - 3.959: 89.4509% ( 211) 
00:18:54.637 3.959 - 3.982: 91.1316% ( 217) 00:18:54.637 3.982 - 4.006: 92.6110% ( 191) 00:18:54.637 4.006 - 4.030: 93.5404% ( 120) 00:18:54.637 4.030 - 4.053: 94.6092% ( 138) 00:18:54.637 4.053 - 4.077: 95.2289% ( 80) 00:18:54.637 4.077 - 4.101: 95.7168% ( 63) 00:18:54.637 4.101 - 4.124: 96.1196% ( 52) 00:18:54.637 4.124 - 4.148: 96.4062% ( 37) 00:18:54.637 4.148 - 4.172: 96.6153% ( 27) 00:18:54.637 4.172 - 4.196: 96.6927% ( 10) 00:18:54.637 4.196 - 4.219: 96.7934% ( 13) 00:18:54.637 4.219 - 4.243: 96.8399% ( 6) 00:18:54.637 4.243 - 4.267: 96.9406% ( 13) 00:18:54.637 4.267 - 4.290: 97.0568% ( 15) 00:18:54.637 4.290 - 4.314: 97.1575% ( 13) 00:18:54.637 4.314 - 4.338: 97.1884% ( 4) 00:18:54.637 4.338 - 4.361: 97.2736% ( 11) 00:18:54.637 4.361 - 4.385: 97.3279% ( 7) 00:18:54.637 4.385 - 4.409: 97.4208% ( 12) 00:18:54.637 4.409 - 4.433: 97.4518% ( 4) 00:18:54.637 4.433 - 4.456: 97.4595% ( 1) 00:18:54.637 4.456 - 4.480: 97.4750% ( 2) 00:18:54.637 4.480 - 4.504: 97.4828% ( 1) 00:18:54.637 4.504 - 4.527: 97.4983% ( 2) 00:18:54.637 4.527 - 4.551: 97.5060% ( 1) 00:18:54.637 4.551 - 4.575: 97.5215% ( 2) 00:18:54.637 4.575 - 4.599: 97.5447% ( 3) 00:18:54.637 4.622 - 4.646: 97.5525% ( 1) 00:18:54.637 4.646 - 4.670: 97.6144% ( 8) 00:18:54.637 4.670 - 4.693: 97.6299% ( 2) 00:18:54.637 4.693 - 4.717: 97.6687% ( 5) 00:18:54.637 4.717 - 4.741: 97.7074% ( 5) 00:18:54.637 4.741 - 4.764: 97.7848% ( 10) 00:18:54.637 4.764 - 4.788: 97.8158% ( 4) 00:18:54.637 4.788 - 4.812: 97.8778% ( 8) 00:18:54.637 4.812 - 4.836: 97.9630% ( 11) 00:18:54.637 4.836 - 4.859: 97.9940% ( 4) 00:18:54.637 4.859 - 4.883: 98.0482% ( 7) 00:18:54.637 4.883 - 4.907: 98.0869% ( 5) 00:18:54.637 4.907 - 4.930: 98.1101% ( 3) 00:18:54.637 4.930 - 4.954: 98.1334% ( 3) 00:18:54.637 4.954 - 4.978: 98.1876% ( 7) 00:18:54.637 4.978 - 5.001: 98.2108% ( 3) 00:18:54.637 5.001 - 5.025: 98.2418% ( 4) 00:18:54.637 5.025 - 5.049: 98.2573% ( 2) 00:18:54.637 5.049 - 5.073: 98.2805% ( 3) 00:18:54.637 5.073 - 5.096: 98.2960% ( 2) 
00:18:54.637 5.096 - 5.120: 98.3038% ( 1) 00:18:54.637 5.120 - 5.144: 98.3115% ( 1) 00:18:54.637 5.144 - 5.167: 98.3270% ( 2) 00:18:54.637 5.191 - 5.215: 98.3425% ( 2) 00:18:54.637 5.262 - 5.286: 98.3502% ( 1) 00:18:54.637 5.381 - 5.404: 98.3580% ( 1) 00:18:54.637 5.523 - 5.547: 98.3657% ( 1) 00:18:54.637 5.665 - 5.689: 98.3735% ( 1) 00:18:54.637 5.736 - 5.760: 98.3812% ( 1) 00:18:54.637 5.879 - 5.902: 98.3967% ( 2) 00:18:54.637 5.926 - 5.950: 98.4045% ( 1) 00:18:54.637 6.400 - 6.447: 98.4122% ( 1) 00:18:54.637 6.542 - 6.590: 98.4200% ( 1) 00:18:54.637 6.637 - 6.684: 98.4277% ( 1) 00:18:54.637 6.732 - 6.779: 98.4354% ( 1) 00:18:54.637 6.779 - 6.827: 98.4432% ( 1) 00:18:54.637 6.969 - 7.016: 98.4509% ( 1) 00:18:54.637 7.111 - 7.159: 98.4587% ( 1) 00:18:54.637 7.253 - 7.301: 98.4664% ( 1) 00:18:54.637 7.301 - 7.348: 98.4742% ( 1) 00:18:54.637 7.490 - 7.538: 98.4897% ( 2) 00:18:54.637 7.538 - 7.585: 98.4974% ( 1) 00:18:54.637 7.727 - 7.775: 98.5052% ( 1) 00:18:54.637 7.775 - 7.822: 98.5129% ( 1) 00:18:54.637 7.917 - 7.964: 98.5206% ( 1) 00:18:54.637 7.964 - 8.012: 98.5361% ( 2) 00:18:54.637 8.012 - 8.059: 98.5594% ( 3) 00:18:54.637 8.059 - 8.107: 98.5671% ( 1) 00:18:54.637 8.154 - 8.201: 98.5749% ( 1) 00:18:54.637 8.201 - 8.249: 98.5903% ( 2) 00:18:54.637 8.249 - 8.296: 98.5981% ( 1) 00:18:54.637 8.296 - 8.344: 98.6058% ( 1) 00:18:54.637 8.344 - 8.391: 98.6136% ( 1) 00:18:54.637 8.391 - 8.439: 98.6291% ( 2) 00:18:54.637 8.439 - 8.486: 98.6368% ( 1) 00:18:54.637 8.486 - 8.533: 98.6446% ( 1) 00:18:54.637 8.533 - 8.581: 98.6601% ( 2) 00:18:54.637 8.581 - 8.628: 98.6678% ( 1) 00:18:54.637 8.628 - 8.676: 98.6755% ( 1) 00:18:54.637 8.770 - 8.818: 98.6833% ( 1) 00:18:54.637 8.865 - 8.913: 98.6910% ( 1) 00:18:54.637 8.913 - 8.960: 98.6988% ( 1) 00:18:54.637 9.007 - 9.055: 98.7065% ( 1) 00:18:54.637 9.150 - 9.197: 98.7143% ( 1) 00:18:54.637 9.292 - 9.339: 98.7220% ( 1) 00:18:54.637 9.387 - 9.434: 98.7298% ( 1) 00:18:54.637 9.434 - 9.481: 98.7375% ( 1) 00:18:54.637 9.481 - 
9.529: 98.7453% ( 1) 00:18:54.637 9.529 - 9.576: 98.7530% ( 1) 00:18:54.637 9.956 - 10.003: 98.7607% ( 1) 00:18:54.637 10.145 - 10.193: 98.7685% ( 1) 00:18:54.637 10.193 - 10.240: 98.7840% ( 2) 00:18:54.637 10.335 - 10.382: 98.7917% ( 1) 00:18:54.637 10.524 - 10.572: 98.7995% ( 1) 00:18:54.637 10.619 - 10.667: 98.8072% ( 1) 00:18:54.637 10.809 - 10.856: 98.8227% ( 2) 00:18:54.637 11.188 - 11.236: 98.8305% ( 1) 00:18:54.637 11.567 - 11.615: 98.8382% ( 1) 00:18:54.637 11.804 - 11.852: 98.8537% ( 2) 00:18:54.637 11.899 - 11.947: 98.8614% ( 1) 00:18:54.637 12.231 - 12.326: 98.8847% ( 3) 00:18:54.637 12.421 - 12.516: 98.8924% ( 1) 00:18:54.637 12.516 - 12.610: 98.9079% ( 2) 00:18:54.637 12.990 - 13.084: 98.9389% ( 4) 00:18:54.637 13.464 - 13.559: 98.9466% ( 1) 00:18:54.637 13.653 - 13.748: 98.9544% ( 1) 00:18:54.637 13.748 - 13.843: 98.9621% ( 1) 00:18:54.637 13.938 - 14.033: 98.9854% ( 3) 00:18:54.637 14.127 - 14.222: 98.9931% ( 1) 00:18:54.637 14.317 - 14.412: 99.0086% ( 2) 00:18:54.637 14.601 - 14.696: 99.0241% ( 2) 00:18:54.637 15.076 - 15.170: 99.0318% ( 1) 00:18:54.637 16.877 - 16.972: 99.0396% ( 1) 00:18:54.637 16.972 - 17.067: 99.0551% ( 2) 00:18:54.637 17.067 - 17.161: 99.0628% ( 1) 00:18:54.637 17.256 - 17.351: 99.0783% ( 2) 00:18:54.637 17.351 - 17.446: 99.1170% ( 5) 00:18:54.637 17.446 - 17.541: 99.1558% ( 5) 00:18:54.637 17.541 - 17.636: 99.1945% ( 5) 00:18:54.637 17.636 - 17.730: 99.2410% ( 6) 00:18:54.637 17.730 - 17.825: 99.2719% ( 4) 00:18:54.637 17.825 - 17.920: 99.3339% ( 8) 00:18:54.637 17.920 - 18.015: 99.3959% ( 8) 00:18:54.637 18.015 - 18.110: 99.4423% ( 6) 00:18:54.637 18.110 - 18.204: 99.4733% ( 4) 00:18:54.637 18.204 - 18.299: 99.5430% ( 9) 00:18:54.637 18.299 - 18.394: 99.6127% ( 9) 00:18:54.637 18.394 - 18.489: 99.6437% ( 4) 00:18:54.637 18.489 - 18.584: 99.6902% ( 6) 00:18:54.637 18.584 - 18.679: 99.7134% ( 3) 00:18:54.637 18.679 - 18.773: 99.7521% ( 5) 00:18:54.637 18.773 - 18.868: 99.7754% ( 3) 00:18:54.637 18.868 - 18.963: 99.7909% ( 2) 
00:18:54.637 18.963 - 19.058: 99.8064% ( 2) 00:18:54.637 19.342 - 19.437: 99.8141% ( 1) 00:18:54.637 19.532 - 19.627: 99.8219% ( 1) 00:18:54.637 21.049 - 21.144: 99.8296% ( 1) 00:18:54.637 21.713 - 21.807: 99.8373% ( 1) 00:18:54.637 23.893 - 23.988: 99.8451% ( 1) 00:18:54.637 24.178 - 24.273: 99.8528% ( 1) 00:18:54.637 24.652 - 24.841: 99.8606% ( 1) 00:18:54.637 27.307 - 27.496: 99.8761% ( 2) 00:18:54.637 28.444 - 28.634: 99.8838% ( 1) 00:18:54.637 32.237 - 32.427: 99.8916% ( 1) 00:18:54.637 3980.705 - 4004.978: 99.9923% ( 13) 00:18:54.637 4004.978 - 4029.250: 100.0000% ( 1) 00:18:54.637 00:18:54.637 Complete histogram 00:18:54.637 ================== 00:18:54.637 Range in us Cumulative Count 00:18:54.637 2.062 - 2.074: 4.0276% ( 520) 00:18:54.637 2.074 - 2.086: 40.9883% ( 4772) 00:18:54.637 2.086 - 2.098: 47.4634% ( 836) 00:18:54.637 2.098 - 2.110: 52.3352% ( 629) 00:18:54.637 2.110 - 2.121: 60.7389% ( 1085) 00:18:54.637 2.121 - 2.133: 62.4429% ( 220) 00:18:54.637 2.133 - 2.145: 69.5299% ( 915) 00:18:54.637 2.145 - 2.157: 81.5816% ( 1556) 00:18:54.637 2.157 - 2.169: 83.2778% ( 219) 00:18:54.637 2.169 - 2.181: 85.8028% ( 326) 00:18:54.637 2.181 - 2.193: 88.3045% ( 323) 00:18:54.637 2.193 - 2.204: 89.0249% ( 93) 00:18:54.637 2.204 - 2.216: 90.2719% ( 161) 00:18:54.637 2.216 - 2.228: 91.9836% ( 221) 00:18:54.637 2.228 - 2.240: 93.6953% ( 221) 00:18:54.637 2.240 - 2.252: 94.6402% ( 122) 00:18:54.637 2.252 - 2.264: 94.9191% ( 36) 00:18:54.637 2.264 - 2.276: 95.0585% ( 18) 00:18:54.637 2.276 - 2.287: 95.2211% ( 21) 00:18:54.637 2.287 - 2.299: 95.4148% ( 25) 00:18:54.637 2.299 - 2.311: 95.7788% ( 47) 00:18:54.637 2.311 - 2.323: 95.9414% ( 21) 00:18:54.637 2.323 - 2.335: 95.9724% ( 4) 00:18:54.637 2.335 - 2.347: 95.9802% ( 1) 00:18:54.637 2.347 - 2.359: 96.0034% ( 3) 00:18:54.637 2.359 - 2.370: 96.0809% ( 10) 00:18:54.637 2.370 - 2.382: 96.1816% ( 13) 00:18:54.637 2.382 - 2.394: 96.3597% ( 23) 00:18:54.637 2.394 - 2.406: 96.5921% ( 30) 00:18:54.637 2.406 - 2.418: 96.7392% 
( 19) 00:18:54.637 2.418 - 2.430: 96.9174% ( 23) 00:18:54.637 2.430 - 2.441: 97.1110% ( 25) 00:18:54.637 2.441 - 2.453: 97.3434% ( 30) 00:18:54.637 2.453 - 2.465: 97.4983% ( 20) 00:18:54.637 2.465 - 2.477: 97.6841% ( 24) 00:18:54.637 2.477 - 2.489: 97.7926% ( 14) 00:18:54.637 2.489 - 2.501: 97.9320% ( 18) 00:18:54.637 2.501 - 2.513: 98.0559% ( 16) 00:18:54.637 2.513 - 2.524: 98.1256% ( 9) 00:18:54.637 2.524 - 2.536: 98.1953% ( 9) 00:18:54.637 2.536 - 2.548: 98.2960% ( 13) 00:18:54.637 2.548 - 2.560: 98.3193% ( 3) 00:18:54.637 2.560 - 2.572: 98.3890% ( 9) 00:18:54.637 2.572 - 2.584: 98.4277% ( 5) 00:18:54.637 2.584 - 2.596: 98.4509% ( 3) 00:18:54.637 2.596 - 2.607: 98.4819% ( 4) 00:18:54.637 2.607 - 2.619: 98.4974% ( 2) 00:18:54.637 2.619 - 2.631: 98.5052% ( 1) 00:18:54.637 2.631 - 2.643: 98.5129% ( 1) 00:18:54.637 2.643 - 2.655: 98.5206% ( 1) 00:18:54.637 2.655 - 2.667: 98.5284% ( 1) 00:18:54.637 2.667 - 2.679: 98.5361% ( 1) 00:18:54.637 2.761 - 2.773: 98.5439% ( 1) 00:18:54.637 2.821 - 2.833: 98.5516% ( 1) 00:18:54.637 2.868 - 2.880: 98.5594% ( 1) 00:18:54.637 3.484 - 3.508: 98.5671% ( 1) 00:18:54.637 3.508 - 3.532: 98.5749% ( 1) 00:18:54.637 3.556 - 3.579: 98.5903% ( 2) 00:18:54.637 3.579 - 3.603: 98.5981% ( 1) 00:18:54.637 3.698 - 3.721: 98.6136% ( 2) 00:18:54.637 3.721 - 3.745: 98.6368% ( 3) 00:18:54.637 3.769 - 3.793: 98.6523% ( 2) 00:18:54.637 3.793 - 3.816: 98.6678% ( 2) 00:18:54.637 3.840 - 3.864: 98.6755% ( 1) 00:18:54.637 3.864 - 3.887: 98.6833% ( 1) 00:18:54.637 3.887 - 3.911: 98.6910% ( 1) 00:18:54.637 3.911 - 3.935: 98.7065% ( 2) 00:18:54.637 3.935 - 3.959: 98.7143% ( 1) 00:18:54.637 3.982 - 4.006: 98.7220% ( 1) 00:18:54.637 4.030 - 4.053: 98.7298% ( 1) 00:18:54.637 4.077 - 4.101: 98.7375% ( 1) 00:18:54.637 4.101 - 4.124: 98.7607% ( 3) 00:18:54.637 4.148 - 4.172: 98.7685% ( 1) 00:18:54.637 4.196 - 4.219: 98.7762% ( 1) 00:18:54.637 4.267 - 4.290: 98.7840% ( 1) 00:18:54.637 4.314 - 4.338: 98.7917% ( 1) 00:18:54.637 6.116 - 6.163: 98.8072% ( 2) 
00:18:54.638 6.353 - 6.400: 9[2024-11-17 11:13:18.871349] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:54.638 8.8150% ( 1) 00:18:54.638 6.400 - 6.447: 98.8227% ( 1) 00:18:54.638 6.684 - 6.732: 98.8305% ( 1) 00:18:54.638 6.921 - 6.969: 98.8382% ( 1) 00:18:54.638 7.064 - 7.111: 98.8459% ( 1) 00:18:54.638 7.253 - 7.301: 98.8692% ( 3) 00:18:54.638 7.396 - 7.443: 98.8769% ( 1) 00:18:54.638 7.490 - 7.538: 98.8847% ( 1) 00:18:54.638 7.538 - 7.585: 98.8924% ( 1) 00:18:54.638 7.964 - 8.012: 98.9002% ( 1) 00:18:54.638 8.107 - 8.154: 98.9079% ( 1) 00:18:54.638 8.439 - 8.486: 98.9157% ( 1) 00:18:54.638 8.581 - 8.628: 98.9234% ( 1) 00:18:54.638 8.628 - 8.676: 98.9311% ( 1) 00:18:54.638 8.913 - 8.960: 98.9389% ( 1) 00:18:54.638 9.007 - 9.055: 98.9466% ( 1) 00:18:54.638 9.624 - 9.671: 98.9544% ( 1) 00:18:54.638 10.382 - 10.430: 98.9621% ( 1) 00:18:54.638 13.653 - 13.748: 98.9699% ( 1) 00:18:54.638 15.644 - 15.739: 98.9776% ( 1) 00:18:54.638 15.739 - 15.834: 99.0086% ( 4) 00:18:54.638 15.834 - 15.929: 99.0318% ( 3) 00:18:54.638 15.929 - 16.024: 99.0396% ( 1) 00:18:54.638 16.024 - 16.119: 99.0628% ( 3) 00:18:54.638 16.119 - 16.213: 99.0938% ( 4) 00:18:54.638 16.213 - 16.308: 99.1015% ( 1) 00:18:54.638 16.308 - 16.403: 99.1403% ( 5) 00:18:54.638 16.403 - 16.498: 99.1867% ( 6) 00:18:54.638 16.498 - 16.593: 99.2332% ( 6) 00:18:54.638 16.593 - 16.687: 99.2797% ( 6) 00:18:54.638 16.687 - 16.782: 99.2952% ( 2) 00:18:54.638 16.782 - 16.877: 99.3107% ( 2) 00:18:54.638 16.877 - 16.972: 99.3262% ( 2) 00:18:54.638 17.067 - 17.161: 99.3416% ( 2) 00:18:54.638 17.161 - 17.256: 99.3494% ( 1) 00:18:54.638 17.351 - 17.446: 99.3571% ( 1) 00:18:54.638 17.636 - 17.730: 99.3649% ( 1) 00:18:54.638 17.920 - 18.015: 99.3726% ( 1) 00:18:54.638 18.204 - 18.299: 99.3804% ( 1) 00:18:54.638 18.489 - 18.584: 99.3881% ( 1) 00:18:54.638 26.548 - 26.738: 99.3959% ( 1) 00:18:54.638 39.443 - 39.633: 99.4036% ( 1) 00:18:54.638 47.787 - 47.976: 99.4114% ( 
1) 00:18:54.638 3325.345 - 3349.618: 99.4191% ( 1) 00:18:54.638 3980.705 - 4004.978: 99.9148% ( 64) 00:18:54.638 4004.978 - 4029.250: 100.0000% ( 11) 00:18:54.638 00:18:54.638 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:54.638 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:54.638 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:54.638 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:54.638 11:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:54.638 [ 00:18:54.638 { 00:18:54.638 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:54.638 "subtype": "Discovery", 00:18:54.638 "listen_addresses": [], 00:18:54.638 "allow_any_host": true, 00:18:54.638 "hosts": [] 00:18:54.638 }, 00:18:54.638 { 00:18:54.638 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:54.638 "subtype": "NVMe", 00:18:54.638 "listen_addresses": [ 00:18:54.638 { 00:18:54.638 "trtype": "VFIOUSER", 00:18:54.638 "adrfam": "IPv4", 00:18:54.638 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:54.638 "trsvcid": "0" 00:18:54.638 } 00:18:54.638 ], 00:18:54.638 "allow_any_host": true, 00:18:54.638 "hosts": [], 00:18:54.638 "serial_number": "SPDK1", 00:18:54.638 "model_number": "SPDK bdev Controller", 00:18:54.638 "max_namespaces": 32, 00:18:54.638 "min_cntlid": 1, 00:18:54.638 "max_cntlid": 65519, 00:18:54.638 "namespaces": [ 00:18:54.638 { 00:18:54.638 "nsid": 1, 00:18:54.638 "bdev_name": "Malloc1", 00:18:54.638 "name": "Malloc1", 00:18:54.638 "nguid": "A5B30648610649EC905036ED4AA9A34E", 00:18:54.638 "uuid": 
"a5b30648-6106-49ec-9050-36ed4aa9a34e" 00:18:54.638 }, 00:18:54.638 { 00:18:54.638 "nsid": 2, 00:18:54.638 "bdev_name": "Malloc3", 00:18:54.638 "name": "Malloc3", 00:18:54.638 "nguid": "017BAF632DDE4A46950FF67DF788C695", 00:18:54.638 "uuid": "017baf63-2dde-4a46-950f-f67df788c695" 00:18:54.638 } 00:18:54.638 ] 00:18:54.638 }, 00:18:54.638 { 00:18:54.638 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:54.638 "subtype": "NVMe", 00:18:54.638 "listen_addresses": [ 00:18:54.638 { 00:18:54.638 "trtype": "VFIOUSER", 00:18:54.638 "adrfam": "IPv4", 00:18:54.638 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:54.638 "trsvcid": "0" 00:18:54.638 } 00:18:54.638 ], 00:18:54.638 "allow_any_host": true, 00:18:54.638 "hosts": [], 00:18:54.638 "serial_number": "SPDK2", 00:18:54.638 "model_number": "SPDK bdev Controller", 00:18:54.638 "max_namespaces": 32, 00:18:54.638 "min_cntlid": 1, 00:18:54.638 "max_cntlid": 65519, 00:18:54.638 "namespaces": [ 00:18:54.638 { 00:18:54.638 "nsid": 1, 00:18:54.638 "bdev_name": "Malloc2", 00:18:54.638 "name": "Malloc2", 00:18:54.638 "nguid": "B6BC5C988B5C43B59956A039D40751EE", 00:18:54.638 "uuid": "b6bc5c98-8b5c-43b5-9956-a039d40751ee" 00:18:54.638 } 00:18:54.638 ] 00:18:54.638 } 00:18:54.638 ] 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=232993 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1269 -- # local i=0 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:54.638 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:54.905 [2024-11-17 11:13:19.422253] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:54.905 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:55.163 Malloc4 00:18:55.163 11:13:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:55.422 [2024-11-17 11:13:20.046925] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:55.422 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:55.682 Asynchronous Event Request test 00:18:55.682 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.682 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:55.682 Registering asynchronous event callbacks... 00:18:55.682 Starting namespace attribute notice tests for all controllers... 00:18:55.682 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:55.682 aer_cb - Changed Namespace 00:18:55.682 Cleaning up... 
00:18:55.682 [ 00:18:55.682 { 00:18:55.682 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.682 "subtype": "Discovery", 00:18:55.682 "listen_addresses": [], 00:18:55.682 "allow_any_host": true, 00:18:55.682 "hosts": [] 00:18:55.682 }, 00:18:55.682 { 00:18:55.682 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:55.682 "subtype": "NVMe", 00:18:55.682 "listen_addresses": [ 00:18:55.682 { 00:18:55.682 "trtype": "VFIOUSER", 00:18:55.682 "adrfam": "IPv4", 00:18:55.682 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:55.682 "trsvcid": "0" 00:18:55.682 } 00:18:55.682 ], 00:18:55.682 "allow_any_host": true, 00:18:55.682 "hosts": [], 00:18:55.682 "serial_number": "SPDK1", 00:18:55.682 "model_number": "SPDK bdev Controller", 00:18:55.682 "max_namespaces": 32, 00:18:55.682 "min_cntlid": 1, 00:18:55.682 "max_cntlid": 65519, 00:18:55.682 "namespaces": [ 00:18:55.682 { 00:18:55.682 "nsid": 1, 00:18:55.682 "bdev_name": "Malloc1", 00:18:55.682 "name": "Malloc1", 00:18:55.682 "nguid": "A5B30648610649EC905036ED4AA9A34E", 00:18:55.682 "uuid": "a5b30648-6106-49ec-9050-36ed4aa9a34e" 00:18:55.682 }, 00:18:55.682 { 00:18:55.682 "nsid": 2, 00:18:55.682 "bdev_name": "Malloc3", 00:18:55.682 "name": "Malloc3", 00:18:55.682 "nguid": "017BAF632DDE4A46950FF67DF788C695", 00:18:55.682 "uuid": "017baf63-2dde-4a46-950f-f67df788c695" 00:18:55.682 } 00:18:55.682 ] 00:18:55.682 }, 00:18:55.682 { 00:18:55.682 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:55.682 "subtype": "NVMe", 00:18:55.682 "listen_addresses": [ 00:18:55.682 { 00:18:55.682 "trtype": "VFIOUSER", 00:18:55.682 "adrfam": "IPv4", 00:18:55.682 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:55.682 "trsvcid": "0" 00:18:55.682 } 00:18:55.682 ], 00:18:55.682 "allow_any_host": true, 00:18:55.682 "hosts": [], 00:18:55.682 "serial_number": "SPDK2", 00:18:55.682 "model_number": "SPDK bdev Controller", 00:18:55.682 "max_namespaces": 32, 00:18:55.682 "min_cntlid": 1, 00:18:55.682 "max_cntlid": 65519, 00:18:55.682 "namespaces": [ 
00:18:55.682 { 00:18:55.682 "nsid": 1, 00:18:55.682 "bdev_name": "Malloc2", 00:18:55.682 "name": "Malloc2", 00:18:55.682 "nguid": "B6BC5C988B5C43B59956A039D40751EE", 00:18:55.682 "uuid": "b6bc5c98-8b5c-43b5-9956-a039d40751ee" 00:18:55.682 }, 00:18:55.682 { 00:18:55.682 "nsid": 2, 00:18:55.682 "bdev_name": "Malloc4", 00:18:55.682 "name": "Malloc4", 00:18:55.682 "nguid": "BC4A75E2FAD346C5A80219BC9E570FB5", 00:18:55.682 "uuid": "bc4a75e2-fad3-46c5-a802-19bc9e570fb5" 00:18:55.682 } 00:18:55.682 ] 00:18:55.682 } 00:18:55.682 ] 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 232993 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 227243 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 227243 ']' 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 227243 00:18:55.682 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227243 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227243' 00:18:55.943 killing process with pid 227243 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 227243 00:18:55.943 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 227243 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=233142 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 233142' 00:18:56.205 Process pid: 233142 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 233142 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 233142 ']' 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.205 11:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.205 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:56.205 [2024-11-17 11:13:20.743737] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:56.205 [2024-11-17 11:13:20.745000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:18:56.205 [2024-11-17 11:13:20.745063] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.205 [2024-11-17 11:13:20.818968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.465 [2024-11-17 11:13:20.868184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.465 [2024-11-17 11:13:20.868252] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.465 [2024-11-17 11:13:20.868280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.465 [2024-11-17 11:13:20.868291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.465 [2024-11-17 11:13:20.868301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:56.465 [2024-11-17 11:13:20.873545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.465 [2024-11-17 11:13:20.873614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.465 [2024-11-17 11:13:20.873677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.465 [2024-11-17 11:13:20.873681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.465 [2024-11-17 11:13:20.966268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:56.465 [2024-11-17 11:13:20.966462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:56.465 [2024-11-17 11:13:20.966756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:56.465 [2024-11-17 11:13:20.967395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:56.465 [2024-11-17 11:13:20.967652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:18:56.465 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.465 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:56.465 11:13:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:57.405 11:13:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:57.975 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:57.975 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:57.975 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:57.975 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:57.975 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:58.234 Malloc1 00:18:58.234 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:58.560 11:13:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:58.818 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:18:59.076 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:59.076 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:59.076 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:59.335 Malloc2 00:18:59.335 11:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:59.593 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:59.851 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:00.110 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:00.110 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 233142 00:19:00.110 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 233142 ']' 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 233142 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.111 11:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233142 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233142' 00:19:00.111 killing process with pid 233142 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 233142 00:19:00.111 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 233142 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:00.369 00:19:00.369 real 0m54.794s 00:19:00.369 user 3m31.938s 00:19:00.369 sys 0m3.941s 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:00.369 ************************************ 00:19:00.369 END TEST nvmf_vfio_user 00:19:00.369 ************************************ 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:00.369 ************************************ 00:19:00.369 START TEST nvmf_vfio_user_nvme_compliance 00:19:00.369 ************************************ 00:19:00.369 11:13:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:00.369 * Looking for test storage... 00:19:00.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:00.369 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.629 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.629 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.630 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.630 --rc genhtml_branch_coverage=1 00:19:00.630 --rc genhtml_function_coverage=1 00:19:00.630 --rc genhtml_legend=1 00:19:00.630 --rc geninfo_all_blocks=1 00:19:00.630 --rc geninfo_unexecuted_blocks=1 00:19:00.630 00:19:00.630 ' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.630 --rc genhtml_branch_coverage=1 00:19:00.630 --rc genhtml_function_coverage=1 00:19:00.630 --rc genhtml_legend=1 00:19:00.630 --rc geninfo_all_blocks=1 00:19:00.630 --rc geninfo_unexecuted_blocks=1 00:19:00.630 00:19:00.630 ' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.630 --rc genhtml_branch_coverage=1 00:19:00.630 --rc genhtml_function_coverage=1 00:19:00.630 --rc 
genhtml_legend=1 00:19:00.630 --rc geninfo_all_blocks=1 00:19:00.630 --rc geninfo_unexecuted_blocks=1 00:19:00.630 00:19:00.630 ' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.630 --rc genhtml_branch_coverage=1 00:19:00.630 --rc genhtml_function_coverage=1 00:19:00.630 --rc genhtml_legend=1 00:19:00.630 --rc geninfo_all_blocks=1 00:19:00.630 --rc geninfo_unexecuted_blocks=1 00:19:00.630 00:19:00.630 ' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.630 11:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.630 11:13:25 
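The `[: : integer expression expected` error from nvmf/common.sh line 33 above comes from numerically comparing an empty string. A minimal reproduction, with a hedged fix (an assumption about the intended behavior, not the script's actual patch):

```shell
maybe_flag=""   # hypothetical stand-in for the unset variable in common.sh

# Numeric test on an empty string reproduces the log's error
# ("[: : integer expression expected"); stderr suppressed here.
[ "$maybe_flag" -eq 1 ] 2>/dev/null && echo "flag set"

# Hedged fix: default the empty value to 0 before comparing numerically.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    result="set"
else
    result="unset"
fi
echo "flag $result"
```

With the `${var:-0}` default, the test degrades cleanly to false instead of printing an error for every sourced script.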
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=233750 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:00.630 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 233750' 00:19:00.630 Process pid: 233750 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 233750 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 233750 ']' 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
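The `trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT` line above is the standard shell idiom for guaranteeing target cleanup. A self-contained sketch of the pattern (generic illustration, not the compliance script itself):

```shell
# Run in a child bash so the EXIT trap fires at a well-defined point.
out=$(bash -c '
    cleanup() { echo "cleanup ran"; }
    # The trap runs cleanup on normal exit as well as on signals.
    trap cleanup EXIT
    echo "doing work"
')
echo "$out"
```

Whether the script exits normally or is interrupted, the handler runs exactly once, which is why the log later re-arms `trap - SIGINT SIGTERM EXIT` before the deliberate teardown.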
# local rpc_addr=/var/tmp/spdk.sock 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.631 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:00.631 [2024-11-17 11:13:25.190148] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:19:00.631 [2024-11-17 11:13:25.190237] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.631 [2024-11-17 11:13:25.257019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.889 [2024-11-17 11:13:25.306296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.889 [2024-11-17 11:13:25.306349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.889 [2024-11-17 11:13:25.306378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.889 [2024-11-17 11:13:25.306390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.889 [2024-11-17 11:13:25.306400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:00.889 [2024-11-17 11:13:25.307761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.889 [2024-11-17 11:13:25.307825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.889 [2024-11-17 11:13:25.307828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.889 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.889 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:00.889 11:13:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:01.824 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.824 11:13:26 
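The three "Reactor started on core N" notices above match the `-m 0x7` core mask passed to nvmf_tgt (bits 0-2 set, so cores 0, 1, 2). An illustrative decoding of such a hex mask (not SPDK's own parser):

```shell
mask=0x7    # core mask as passed via nvmf_tgt -m
cores=""
# Bash arithmetic accepts the 0x prefix; test each of the low 32 bits.
for c in $(seq 0 31); do
    if (( (mask >> c) & 1 )); then
        cores="$cores$c "
    fi
done
echo "cores: $cores"
```

This also explains the earlier `app.c` notice "Total cores available: 3" for this run.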
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.083 malloc0 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:02.083 11:13:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:02.083 00:19:02.083 00:19:02.083 CUnit - A unit testing framework for C - Version 2.1-3 00:19:02.083 http://cunit.sourceforge.net/ 00:19:02.083 00:19:02.083 00:19:02.083 Suite: nvme_compliance 00:19:02.083 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-17 11:13:26.685133] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.083 [2024-11-17 11:13:26.686628] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:02.083 [2024-11-17 11:13:26.686654] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:02.083 [2024-11-17 11:13:26.686668] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:02.083 [2024-11-17 11:13:26.688146] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.083 passed 00:19:02.341 Test: admin_identify_ctrlr_verify_fused ...[2024-11-17 11:13:26.770734] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.341 [2024-11-17 11:13:26.773749] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.341 passed 00:19:02.341 Test: admin_identify_ns ...[2024-11-17 11:13:26.864267] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.341 [2024-11-17 11:13:26.923558] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:02.341 [2024-11-17 11:13:26.931542] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:02.341 [2024-11-17 11:13:26.952671] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:02.341 passed 00:19:02.599 Test: admin_get_features_mandatory_features ...[2024-11-17 11:13:27.037107] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.599 [2024-11-17 11:13:27.040129] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.599 passed 00:19:02.599 Test: admin_get_features_optional_features ...[2024-11-17 11:13:27.122679] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.599 [2024-11-17 11:13:27.125704] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.599 passed 00:19:02.599 Test: admin_set_features_number_of_queues ...[2024-11-17 11:13:27.208889] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.857 [2024-11-17 11:13:27.313643] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.857 passed 00:19:02.857 Test: admin_get_log_page_mandatory_logs ...[2024-11-17 11:13:27.402778] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:02.857 [2024-11-17 11:13:27.405798] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.857 passed 00:19:02.857 Test: admin_get_log_page_with_lpo ...[2024-11-17 11:13:27.487117] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.115 [2024-11-17 11:13:27.555540] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:03.115 [2024-11-17 11:13:27.567630] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.115 passed 00:19:03.115 Test: fabric_property_get ...[2024-11-17 11:13:27.648234] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.115 [2024-11-17 11:13:27.649530] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:03.115 [2024-11-17 11:13:27.653269] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.115 passed 00:19:03.115 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-17 11:13:27.736840] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.115 [2024-11-17 11:13:27.738113] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:03.115 [2024-11-17 11:13:27.739870] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.373 passed 00:19:03.374 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-17 11:13:27.825398] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.374 [2024-11-17 11:13:27.910564] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:03.374 [2024-11-17 11:13:27.926551] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:03.374 [2024-11-17 11:13:27.931633] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.374 passed 00:19:03.374 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-17 11:13:28.016262] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.374 [2024-11-17 11:13:28.017572] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:03.374 [2024-11-17 11:13:28.019283] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.632 passed 00:19:03.632 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-17 11:13:28.101130] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.632 [2024-11-17 11:13:28.176533] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:03.632 [2024-11-17 
11:13:28.200538] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:03.632 [2024-11-17 11:13:28.205630] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.632 passed 00:19:03.632 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-17 11:13:28.287837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.890 [2024-11-17 11:13:28.289183] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:03.890 [2024-11-17 11:13:28.289221] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:03.890 [2024-11-17 11:13:28.290858] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:03.890 passed 00:19:03.890 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-17 11:13:28.376182] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:03.890 [2024-11-17 11:13:28.468548] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:03.890 [2024-11-17 11:13:28.476553] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:03.890 [2024-11-17 11:13:28.484565] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:03.890 [2024-11-17 11:13:28.492549] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:03.890 [2024-11-17 11:13:28.521645] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.147 passed 00:19:04.147 Test: admin_create_io_sq_verify_pc ...[2024-11-17 11:13:28.605233] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:04.147 [2024-11-17 11:13:28.621548] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:04.147 [2024-11-17 11:13:28.639563] 
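The `admin_create_io_sq_verify_qsize_cqid` test above rejects I/O queue sizes 1 and 257, which suggests an accepted range of 2..256 entries for this target configuration (inferred from the log, not from the NVMe specification text). A sketch of that bounds check:

```shell
# Hypothetical validator mirroring the rejections seen in the log:
# sizes 1 and 257 are refused, so assume an inclusive 2..256 range.
qsize_valid() {
    local n=$1
    [ "$n" -ge 2 ] && [ "$n" -le 256 ]
}

r1=$(qsize_valid 1 && echo accepted || echo rejected)
r256=$(qsize_valid 256 && echo accepted || echo rejected)
r257=$(qsize_valid 257 && echo accepted || echo rejected)
echo "1:$r1 256:$r256 257:$r257"
```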
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:04.147 passed 00:19:04.147 Test: admin_create_io_qp_max_qps ...[2024-11-17 11:13:28.724131] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.518 [2024-11-17 11:13:29.831544] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:05.777 [2024-11-17 11:13:30.200359] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:05.777 passed 00:19:05.777 Test: admin_create_io_sq_shared_cq ...[2024-11-17 11:13:30.290590] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:05.777 [2024-11-17 11:13:30.424548] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:06.035 [2024-11-17 11:13:30.461628] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:06.035 passed 00:19:06.035 00:19:06.035 Run Summary: Type Total Ran Passed Failed Inactive 00:19:06.035 suites 1 1 n/a 0 0 00:19:06.035 tests 18 18 18 0 0 00:19:06.035 asserts 360 360 360 0 n/a 00:19:06.035 00:19:06.035 Elapsed time = 1.569 seconds 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 233750 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 233750 ']' 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 233750 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 233750 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 233750' 00:19:06.035 killing process with pid 233750 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 233750 00:19:06.035 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 233750 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:06.294 00:19:06.294 real 0m5.788s 00:19:06.294 user 0m16.226s 00:19:06.294 sys 0m0.566s 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:06.294 ************************************ 00:19:06.294 END TEST nvmf_vfio_user_nvme_compliance 00:19:06.294 ************************************ 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra -- 
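The `kill -0 233750` call in the killprocess sequence above sends no signal at all; signal 0 only checks whether the PID exists and can be signalled, which is how the helper decides whether a `kill`/`wait` is still needed. A small demonstration:

```shell
# kill -0 reports liveness without delivering a signal.
alive=no
kill -0 $$ 2>/dev/null && alive=yes          # our own PID: expect success

# A PID far above any plausible pid_max: expect failure.
ghost=yes
kill -0 2147483647 2>/dev/null || ghost=no
echo "self alive: $alive, ghost alive: $ghost"
```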
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.294 ************************************ 00:19:06.294 START TEST nvmf_vfio_user_fuzz 00:19:06.294 ************************************ 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:06.294 * Looking for test storage... 00:19:06.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:06.294 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.295 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.295 --rc genhtml_branch_coverage=1 00:19:06.295 --rc genhtml_function_coverage=1 00:19:06.295 --rc genhtml_legend=1 00:19:06.295 --rc geninfo_all_blocks=1 00:19:06.295 --rc geninfo_unexecuted_blocks=1 00:19:06.295 00:19:06.295 ' 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.295 --rc genhtml_branch_coverage=1 00:19:06.295 --rc genhtml_function_coverage=1 00:19:06.295 --rc genhtml_legend=1 00:19:06.295 --rc geninfo_all_blocks=1 00:19:06.295 --rc geninfo_unexecuted_blocks=1 00:19:06.295 00:19:06.295 ' 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.295 --rc genhtml_branch_coverage=1 00:19:06.295 --rc genhtml_function_coverage=1 00:19:06.295 --rc genhtml_legend=1 00:19:06.295 --rc geninfo_all_blocks=1 00:19:06.295 --rc geninfo_unexecuted_blocks=1 00:19:06.295 00:19:06.295 ' 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.295 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:06.295 --rc genhtml_branch_coverage=1 00:19:06.295 --rc genhtml_function_coverage=1 00:19:06.295 --rc genhtml_legend=1 00:19:06.295 --rc geninfo_all_blocks=1 00:19:06.295 --rc geninfo_unexecuted_blocks=1 00:19:06.295 00:19:06.295 ' 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.295 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.554 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=234484 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 234484' 00:19:06.554 Process pid: 234484 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 234484 00:19:06.554 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 234484 ']' 00:19:06.555 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.555 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.555 11:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.555 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.555 11:13:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:06.813 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.813 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:06.813 11:13:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.748 malloc0 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:07.748 11:13:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:39.835 Fuzzing completed. Shutting down the fuzz application 00:19:39.835 00:19:39.835 Dumping successful admin opcodes: 00:19:39.835 8, 9, 10, 24, 00:19:39.835 Dumping successful io opcodes: 00:19:39.835 0, 00:19:39.835 NS: 0x20000081ef00 I/O qp, Total commands completed: 676122, total successful commands: 2633, random_seed: 3338437824 00:19:39.835 NS: 0x20000081ef00 admin qp, Total commands completed: 86682, total successful commands: 692, random_seed: 1415795328 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 234484 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 234484 ']' 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 234484 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234484 00:19:39.835 11:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234484' 00:19:39.835 killing process with pid 234484 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 234484 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 234484 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:39.835 00:19:39.835 real 0m32.109s 00:19:39.835 user 0m34.120s 00:19:39.835 sys 0m25.694s 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:39.835 ************************************ 00:19:39.835 END TEST nvmf_vfio_user_fuzz 00:19:39.835 ************************************ 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:39.835 ************************************ 00:19:39.835 START TEST nvmf_auth_target 00:19:39.835 ************************************ 00:19:39.835 11:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:39.835 * Looking for test storage... 00:19:39.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.835 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.835 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.836 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:39.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.836 --rc genhtml_branch_coverage=1 00:19:39.836 --rc genhtml_function_coverage=1 00:19:39.836 --rc genhtml_legend=1 00:19:39.836 --rc geninfo_all_blocks=1 00:19:39.836 --rc geninfo_unexecuted_blocks=1 00:19:39.836 00:19:39.836 ' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:39.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.836 --rc genhtml_branch_coverage=1 00:19:39.836 --rc genhtml_function_coverage=1 00:19:39.836 --rc genhtml_legend=1 00:19:39.836 --rc geninfo_all_blocks=1 00:19:39.836 --rc geninfo_unexecuted_blocks=1 00:19:39.836 00:19:39.836 ' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:39.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.836 --rc genhtml_branch_coverage=1 00:19:39.836 --rc genhtml_function_coverage=1 00:19:39.836 --rc genhtml_legend=1 00:19:39.836 --rc geninfo_all_blocks=1 00:19:39.836 --rc geninfo_unexecuted_blocks=1 00:19:39.836 00:19:39.836 ' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:39.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.836 --rc genhtml_branch_coverage=1 00:19:39.836 --rc genhtml_function_coverage=1 00:19:39.836 --rc genhtml_legend=1 00:19:39.836 
--rc geninfo_all_blocks=1 00:19:39.836 --rc geninfo_unexecuted_blocks=1 00:19:39.836 00:19:39.836 ' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.836 
11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:39.836 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:39.836 11:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:39.836 11:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.777 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.777 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:40.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:40.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.777 
11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:40.777 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.777 
11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:40.777 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.777 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.777 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:19:40.777 00:19:40.777 --- 10.0.0.2 ping statistics --- 00:19:40.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.777 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:19:40.778 00:19:40.778 --- 10.0.0.1 ping statistics --- 00:19:40.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.778 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.778 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=239915 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 239915 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 239915 ']' 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.037 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=239944 00:19:41.296 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7c878d61c62c06dfe662360c1bdbbe31038dd2fdb6c1640e 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eMO 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7c878d61c62c06dfe662360c1bdbbe31038dd2fdb6c1640e 0 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7c878d61c62c06dfe662360c1bdbbe31038dd2fdb6c1640e 0 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7c878d61c62c06dfe662360c1bdbbe31038dd2fdb6c1640e 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eMO 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eMO 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.eMO 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f7ee27ee75e990c4fe413d6f91533da6aeee06dc35ca2faf528d7e08a21f4d88 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vxl 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f7ee27ee75e990c4fe413d6f91533da6aeee06dc35ca2faf528d7e08a21f4d88 3 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f7ee27ee75e990c4fe413d6f91533da6aeee06dc35ca2faf528d7e08a21f4d88 3 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f7ee27ee75e990c4fe413d6f91533da6aeee06dc35ca2faf528d7e08a21f4d88 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vxl 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vxl 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Vxl 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2a71372d29c06ae5800edf02cacdd614 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.AXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2a71372d29c06ae5800edf02cacdd614 1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
2a71372d29c06ae5800edf02cacdd614 1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2a71372d29c06ae5800edf02cacdd614 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.AXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.AXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.AXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cff8248a20813798a7de39fd4a3092ffb62238f1a6abd65d 00:19:41.297 11:14:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IP6 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cff8248a20813798a7de39fd4a3092ffb62238f1a6abd65d 2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cff8248a20813798a7de39fd4a3092ffb62238f1a6abd65d 2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cff8248a20813798a7de39fd4a3092ffb62238f1a6abd65d 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IP6 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IP6 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IP6 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=873ac37a57031e014f6c9aad674a7cef2e8e0786b75c5264 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zNe 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 873ac37a57031e014f6c9aad674a7cef2e8e0786b75c5264 2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 873ac37a57031e014f6c9aad674a7cef2e8e0786b75c5264 2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=873ac37a57031e014f6c9aad674a7cef2e8e0786b75c5264 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:41.297 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zNe 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zNe 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.zNe 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a4f67c1a510def6e99509d1074a7b23f 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sEP 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a4f67c1a510def6e99509d1074a7b23f 1 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a4f67c1a510def6e99509d1074a7b23f 1 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a4f67c1a510def6e99509d1074a7b23f 00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:41.555 11:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sEP 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sEP 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.sEP 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d6fb8eb0c42f9e966f2802264c51956f8a92ad3bb118ae9ab3a6eeda8ce7254 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5Bl 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d6fb8eb0c42f9e966f2802264c51956f8a92ad3bb118ae9ab3a6eeda8ce7254 3 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8d6fb8eb0c42f9e966f2802264c51956f8a92ad3bb118ae9ab3a6eeda8ce7254 3 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d6fb8eb0c42f9e966f2802264c51956f8a92ad3bb118ae9ab3a6eeda8ce7254 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5Bl 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5Bl 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.5Bl 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 239915 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 239915 ']' 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
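[editorial note] The `gen_dhchap_key` / `format_dhchap_key` trace above reads random hex from `/dev/urandom` via `xxd` and pipes it through an inline `python -` snippet to produce the `DHHC-1:0X:...:` secret files (`/tmp/spdk.key-*`). As a rough sketch of what that formatting step does — assuming the standard NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes followed by their little-endian CRC-32), which the embedded script is not shown here:

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Sketch of the DHHC-1 secret formatting seen in the log.

    hex_key: raw secret as a hex string (e.g. from `xxd -p -c0 -l 24 /dev/urandom`)
    digest:  hash index as used above (0=null, 1=sha256, 2=sha384, 3=sha512)
    """
    key = bytes.fromhex(hex_key)
    # Append CRC-32 of the key bytes, little-endian, then base64-encode.
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{digest:02d}:{b64}:"


# Example with one of the 48-hex-digit (24-byte) keys from the log:
secret = format_dhchap_key(
    "7c878d61c62c06dfe662360c1bdbbe31038dd2fdb6c1640e", 0
)
```

This is an illustration only; the authoritative logic lives in `nvmf/common.sh` (`format_dhchap_key` → `format_key`) in the SPDK tree.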
00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.555 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 239944 /var/tmp/host.sock 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 239944 ']' 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:41.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.813 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eMO 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.eMO 00:19:42.071 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.eMO 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Vxl ]] 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vxl 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vxl 00:19:42.329 11:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vxl 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AXX 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AXX 00:19:42.587 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AXX 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.IP6 ]] 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IP6 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IP6 00:19:42.845 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IP6 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zNe 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.zNe 00:19:43.103 11:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.zNe 00:19:43.364 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.sEP ]] 00:19:43.364 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sEP 00:19:43.364 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.364 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.624 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.624 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sEP 00:19:43.624 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sEP 00:19:43.884 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:43.884 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5Bl 00:19:43.884 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.885 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.885 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.885 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5Bl 00:19:43.885 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5Bl 00:19:44.144 11:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:44.144 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:44.144 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.144 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.144 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.144 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.404 11:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.404 11:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.663 00:19:44.663 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.663 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.663 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.920 { 00:19:44.920 "cntlid": 1, 00:19:44.920 "qid": 0, 00:19:44.920 "state": "enabled", 00:19:44.920 "thread": "nvmf_tgt_poll_group_000", 00:19:44.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:44.920 "listen_address": { 00:19:44.920 "trtype": "TCP", 00:19:44.920 "adrfam": "IPv4", 00:19:44.920 "traddr": "10.0.0.2", 00:19:44.920 "trsvcid": "4420" 00:19:44.920 }, 00:19:44.920 "peer_address": { 00:19:44.920 "trtype": "TCP", 00:19:44.920 "adrfam": "IPv4", 00:19:44.920 "traddr": "10.0.0.1", 00:19:44.920 "trsvcid": "37328" 00:19:44.920 }, 00:19:44.920 "auth": { 00:19:44.920 "state": "completed", 00:19:44.920 "digest": "sha256", 00:19:44.920 "dhgroup": "null" 00:19:44.920 } 00:19:44.920 } 00:19:44.920 ]' 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:44.920 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.178 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.178 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.178 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.438 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:19:45.438 11:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:50.709 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.710 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.710 11:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.710 { 00:19:50.710 "cntlid": 3, 00:19:50.710 "qid": 0, 00:19:50.710 "state": "enabled", 00:19:50.710 "thread": "nvmf_tgt_poll_group_000", 00:19:50.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:50.710 "listen_address": { 00:19:50.710 "trtype": "TCP", 00:19:50.710 "adrfam": "IPv4", 00:19:50.710 
"traddr": "10.0.0.2", 00:19:50.710 "trsvcid": "4420" 00:19:50.710 }, 00:19:50.710 "peer_address": { 00:19:50.710 "trtype": "TCP", 00:19:50.710 "adrfam": "IPv4", 00:19:50.710 "traddr": "10.0.0.1", 00:19:50.710 "trsvcid": "53298" 00:19:50.710 }, 00:19:50.710 "auth": { 00:19:50.710 "state": "completed", 00:19:50.710 "digest": "sha256", 00:19:50.710 "dhgroup": "null" 00:19:50.710 } 00:19:50.710 } 00:19:50.710 ]' 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:50.710 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.969 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.969 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.969 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.228 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:19:51.228 11:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.169 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.428 11:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.687 00:19:52.687 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.687 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.687 
11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.945 { 00:19:52.945 "cntlid": 5, 00:19:52.945 "qid": 0, 00:19:52.945 "state": "enabled", 00:19:52.945 "thread": "nvmf_tgt_poll_group_000", 00:19:52.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:52.945 "listen_address": { 00:19:52.945 "trtype": "TCP", 00:19:52.945 "adrfam": "IPv4", 00:19:52.945 "traddr": "10.0.0.2", 00:19:52.945 "trsvcid": "4420" 00:19:52.945 }, 00:19:52.945 "peer_address": { 00:19:52.945 "trtype": "TCP", 00:19:52.945 "adrfam": "IPv4", 00:19:52.945 "traddr": "10.0.0.1", 00:19:52.945 "trsvcid": "53316" 00:19:52.945 }, 00:19:52.945 "auth": { 00:19:52.945 "state": "completed", 00:19:52.945 "digest": "sha256", 00:19:52.945 "dhgroup": "null" 00:19:52.945 } 00:19:52.945 } 00:19:52.945 ]' 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.945 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.205 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:19:53.205 11:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.143 11:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
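The `--dhchap-secret DHHC-1:…` strings passed to `nvme connect` above follow the NVMe-oF DH-HMAC-CHAP secret representation: `DHHC-1:<hash>:<base64>:`, where `<hash>` is `00` (no transformation), `01` (SHA-256), `02` (SHA-384), or `03` (SHA-512). A small parser sketch (the trailing 4 bytes being a CRC-32 over the secret, per the nvme-cli convention, is our reading of the format rather than something the log states; the sample secret is taken verbatim from the log):

```shell
# Split a "DHHC-1:<hash>:<base64>:" secret into its fields and report the
# decoded payload size. Assumption (hedged): the base64 payload is the raw
# secret followed by a 4-byte CRC-32, as generated by nvme gen-dhchap-key.
parse_dhchap_secret() {
    case "$1" in
        DHHC-1:*) ;;
        *) echo "bad prefix" >&2; return 1 ;;
    esac
    hash=${1#DHHC-1:}; hash=${hash%%:*}          # 00/01/02/03 hash selector
    b64=${1#DHHC-1:"$hash":}; b64=${b64%:}       # base64 payload, sans trailing ':'
    bytes=$(( $(printf '%s' "$b64" | base64 -d | wc -c) ))
    echo "hash=$hash payload=$bytes secret=$((bytes - 4))"
}

# First secret used in this run (a "00", untransformed key):
parse_dhchap_secret "DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==:"
```

Decoding that sample yields a 52-byte payload, i.e. a 48-byte secret plus the 4-byte check value, which is why the log pairs it with `key-null`/`key-sha512` files of matching strength.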
00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.402 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:54.971 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.971 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.230 
11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.230 { 00:19:55.230 "cntlid": 7, 00:19:55.230 "qid": 0, 00:19:55.230 "state": "enabled", 00:19:55.230 "thread": "nvmf_tgt_poll_group_000", 00:19:55.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:55.230 "listen_address": { 00:19:55.230 "trtype": "TCP", 00:19:55.230 "adrfam": "IPv4", 00:19:55.230 "traddr": "10.0.0.2", 00:19:55.230 "trsvcid": "4420" 00:19:55.230 }, 00:19:55.230 "peer_address": { 00:19:55.230 "trtype": "TCP", 00:19:55.230 "adrfam": "IPv4", 00:19:55.230 "traddr": "10.0.0.1", 00:19:55.230 "trsvcid": "53342" 00:19:55.230 }, 00:19:55.230 "auth": { 00:19:55.230 "state": "completed", 00:19:55.230 "digest": "sha256", 00:19:55.230 "dhgroup": "null" 00:19:55.230 } 00:19:55.230 } 00:19:55.230 ]' 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.230 11:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.488 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:19:55.488 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:56.427 11:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.685 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.686 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.686 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.944 00:19:56.944 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.944 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.944 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.202 { 00:19:57.202 "cntlid": 9, 00:19:57.202 "qid": 0, 00:19:57.202 "state": "enabled", 00:19:57.202 "thread": "nvmf_tgt_poll_group_000", 00:19:57.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:57.202 "listen_address": { 00:19:57.202 "trtype": "TCP", 00:19:57.202 "adrfam": "IPv4", 00:19:57.202 "traddr": "10.0.0.2", 00:19:57.202 "trsvcid": "4420" 00:19:57.202 }, 00:19:57.202 "peer_address": { 00:19:57.202 "trtype": "TCP", 00:19:57.202 "adrfam": "IPv4", 00:19:57.202 "traddr": "10.0.0.1", 00:19:57.202 "trsvcid": "53368" 00:19:57.202 
}, 00:19:57.202 "auth": { 00:19:57.202 "state": "completed", 00:19:57.202 "digest": "sha256", 00:19:57.202 "dhgroup": "ffdhe2048" 00:19:57.202 } 00:19:57.202 } 00:19:57.202 ]' 00:19:57.202 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.460 11:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.719 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:19:57.719 11:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.656 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.657 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.657 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.915 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.173 00:19:59.173 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.173 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.173 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.431 { 00:19:59.431 "cntlid": 11, 00:19:59.431 "qid": 0, 00:19:59.431 "state": "enabled", 00:19:59.431 "thread": "nvmf_tgt_poll_group_000", 00:19:59.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:59.431 "listen_address": { 00:19:59.431 "trtype": "TCP", 00:19:59.431 "adrfam": "IPv4", 00:19:59.431 "traddr": "10.0.0.2", 00:19:59.431 "trsvcid": "4420" 00:19:59.431 }, 00:19:59.431 "peer_address": { 00:19:59.431 "trtype": "TCP", 00:19:59.431 "adrfam": "IPv4", 00:19:59.431 "traddr": "10.0.0.1", 00:19:59.431 "trsvcid": "34040" 00:19:59.431 }, 00:19:59.431 "auth": { 00:19:59.431 "state": "completed", 00:19:59.431 "digest": "sha256", 00:19:59.431 "dhgroup": "ffdhe2048" 00:19:59.431 } 00:19:59.431 } 00:19:59.431 ]' 00:19:59.431 11:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.431 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.431 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.431 11:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.431 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.692 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.692 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.692 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.951 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:19:59.951 11:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:00.886 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.887 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.456 00:20:01.456 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.456 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.456 11:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.715 11:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.715 { 00:20:01.715 "cntlid": 13, 00:20:01.715 "qid": 0, 00:20:01.715 "state": "enabled", 00:20:01.715 "thread": "nvmf_tgt_poll_group_000", 00:20:01.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:01.715 "listen_address": { 00:20:01.715 "trtype": "TCP", 00:20:01.715 "adrfam": "IPv4", 00:20:01.715 "traddr": "10.0.0.2", 00:20:01.715 "trsvcid": "4420" 00:20:01.715 }, 00:20:01.715 "peer_address": { 00:20:01.715 "trtype": "TCP", 00:20:01.715 "adrfam": "IPv4", 00:20:01.715 "traddr": "10.0.0.1", 00:20:01.715 "trsvcid": "34070" 00:20:01.715 }, 00:20:01.715 "auth": { 00:20:01.715 "state": "completed", 00:20:01.715 "digest": "sha256", 00:20:01.715 "dhgroup": "ffdhe2048" 00:20:01.715 } 00:20:01.715 } 00:20:01.715 ]' 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.715 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.973 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:01.974 11:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.915 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.173 11:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.432 00:20:03.432 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.432 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.432 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.691 { 00:20:03.691 "cntlid": 15, 00:20:03.691 "qid": 0, 00:20:03.691 "state": "enabled", 00:20:03.691 "thread": "nvmf_tgt_poll_group_000", 00:20:03.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.691 "listen_address": { 00:20:03.691 "trtype": "TCP", 00:20:03.691 "adrfam": "IPv4", 00:20:03.691 "traddr": "10.0.0.2", 00:20:03.691 "trsvcid": "4420" 00:20:03.691 }, 00:20:03.691 "peer_address": { 00:20:03.691 "trtype": "TCP", 00:20:03.691 "adrfam": "IPv4", 00:20:03.691 "traddr": "10.0.0.1", 
00:20:03.691 "trsvcid": "34098" 00:20:03.691 }, 00:20:03.691 "auth": { 00:20:03.691 "state": "completed", 00:20:03.691 "digest": "sha256", 00:20:03.691 "dhgroup": "ffdhe2048" 00:20:03.691 } 00:20:03.691 } 00:20:03.691 ]' 00:20:03.691 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.949 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.207 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:04.207 11:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.157 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.416 11:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.416 11:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.674 00:20:05.674 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.674 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.674 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.244 { 00:20:06.244 "cntlid": 17, 00:20:06.244 "qid": 0, 00:20:06.244 "state": "enabled", 00:20:06.244 "thread": "nvmf_tgt_poll_group_000", 00:20:06.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:06.244 "listen_address": { 00:20:06.244 "trtype": "TCP", 00:20:06.244 "adrfam": "IPv4", 00:20:06.244 "traddr": "10.0.0.2", 00:20:06.244 "trsvcid": "4420" 00:20:06.244 }, 00:20:06.244 "peer_address": { 00:20:06.244 "trtype": "TCP", 00:20:06.244 "adrfam": "IPv4", 00:20:06.244 "traddr": "10.0.0.1", 00:20:06.244 "trsvcid": "34130" 00:20:06.244 }, 00:20:06.244 "auth": { 00:20:06.244 "state": "completed", 00:20:06.244 "digest": "sha256", 00:20:06.244 "dhgroup": "ffdhe3072" 00:20:06.244 } 00:20:06.244 } 00:20:06.244 ]' 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.244 11:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.244 11:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.502 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:06.502 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.439 11:14:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.439 11:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.698 11:14:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.698 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.956 00:20:07.956 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.956 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.956 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.215 { 00:20:08.215 "cntlid": 19, 00:20:08.215 "qid": 0, 00:20:08.215 "state": "enabled", 00:20:08.215 "thread": "nvmf_tgt_poll_group_000", 00:20:08.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:08.215 "listen_address": { 00:20:08.215 "trtype": "TCP", 00:20:08.215 "adrfam": "IPv4", 00:20:08.215 "traddr": "10.0.0.2", 00:20:08.215 "trsvcid": "4420" 00:20:08.215 }, 00:20:08.215 "peer_address": { 00:20:08.215 "trtype": "TCP", 00:20:08.215 "adrfam": "IPv4", 00:20:08.215 "traddr": "10.0.0.1", 00:20:08.215 "trsvcid": "34170" 00:20:08.215 }, 00:20:08.215 "auth": { 00:20:08.215 "state": "completed", 00:20:08.215 "digest": "sha256", 00:20:08.215 "dhgroup": "ffdhe3072" 00:20:08.215 } 00:20:08.215 } 00:20:08.215 ]' 00:20:08.215 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.474 11:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.732 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:08.732 11:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.673 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.673 11:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:09.932 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.933 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.192 00:20:10.192 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:10.192 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:10.192 11:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.763 { 00:20:10.763 "cntlid": 21, 00:20:10.763 "qid": 0, 00:20:10.763 "state": "enabled", 00:20:10.763 "thread": "nvmf_tgt_poll_group_000", 00:20:10.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.763 "listen_address": { 00:20:10.763 "trtype": "TCP", 00:20:10.763 "adrfam": "IPv4", 00:20:10.763 "traddr": "10.0.0.2", 00:20:10.763 
"trsvcid": "4420" 00:20:10.763 }, 00:20:10.763 "peer_address": { 00:20:10.763 "trtype": "TCP", 00:20:10.763 "adrfam": "IPv4", 00:20:10.763 "traddr": "10.0.0.1", 00:20:10.763 "trsvcid": "46224" 00:20:10.763 }, 00:20:10.763 "auth": { 00:20:10.763 "state": "completed", 00:20:10.763 "digest": "sha256", 00:20:10.763 "dhgroup": "ffdhe3072" 00:20:10.763 } 00:20:10.763 } 00:20:10.763 ]' 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.763 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.022 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:11.022 11:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:11.963 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.222 11:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.481 00:20:12.481 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.481 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.481 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.740 { 00:20:12.740 "cntlid": 23, 00:20:12.740 "qid": 0, 00:20:12.740 "state": "enabled", 00:20:12.740 "thread": "nvmf_tgt_poll_group_000", 00:20:12.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.740 "listen_address": { 00:20:12.740 "trtype": "TCP", 00:20:12.740 "adrfam": "IPv4", 00:20:12.740 "traddr": "10.0.0.2", 00:20:12.740 "trsvcid": "4420" 00:20:12.740 }, 00:20:12.740 "peer_address": { 00:20:12.740 "trtype": "TCP", 00:20:12.740 "adrfam": "IPv4", 00:20:12.740 "traddr": "10.0.0.1", 00:20:12.740 "trsvcid": "46240" 00:20:12.740 }, 00:20:12.740 "auth": { 00:20:12.740 "state": "completed", 00:20:12.740 "digest": "sha256", 00:20:12.740 "dhgroup": "ffdhe3072" 00:20:12.740 } 00:20:12.740 } 00:20:12.740 ]' 00:20:12.740 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.999 11:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.999 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.257 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:13.257 11:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
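The qpair checks that repeat throughout this log (target/auth.sh@75 through @77) parse the JSON returned by `nvmf_subsystem_get_qpairs` with jq and compare the negotiated digest, DH group, and auth state against the values the iteration configured. A minimal standalone sketch of that verification is below; the qpairs payload is a trimmed copy of output captured in this log, and the helper name `verify_auth` is an assumption for illustration, not a function in auth.sh:

```shell
#!/usr/bin/env bash
# Sketch of the auth-state verification done by target/auth.sh@75-77.
# Assumes jq is installed (the test script itself uses jq the same way).
set -euo pipefail

# Trimmed copy of a qpairs array as returned by nvmf_subsystem_get_qpairs.
qpairs='[
  {
    "cntlid": 17,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe3072"
    }
  }
]'

# verify_auth <expected-digest> <expected-dhgroup>
# Hypothetical helper: succeeds only when the first qpair completed
# DH-HMAC-CHAP with the expected digest and DH group.
verify_auth() {
  local digest dhgroup state
  digest=$(jq -r '.[0].auth.digest' <<<"$qpairs")
  dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
  state=$(jq -r '.[0].auth.state' <<<"$qpairs")
  [[ $digest == "$1" && $dhgroup == "$2" && $state == completed ]]
}

verify_auth sha256 ffdhe3072 && echo "auth verified"
```

In the real test the payload comes from `rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0` on each dhgroup/keyid iteration rather than a literal string; only the jq comparisons are shown here.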
00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.195 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.454 11:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.712 00:20:14.712 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.712 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.712 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.969 11:14:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.969 { 00:20:14.969 "cntlid": 25, 00:20:14.969 "qid": 0, 00:20:14.969 "state": "enabled", 00:20:14.969 "thread": "nvmf_tgt_poll_group_000", 00:20:14.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:14.969 "listen_address": { 00:20:14.969 "trtype": "TCP", 00:20:14.969 "adrfam": "IPv4", 00:20:14.969 "traddr": "10.0.0.2", 00:20:14.969 "trsvcid": "4420" 00:20:14.969 }, 00:20:14.969 "peer_address": { 00:20:14.969 "trtype": "TCP", 00:20:14.969 "adrfam": "IPv4", 00:20:14.969 "traddr": "10.0.0.1", 00:20:14.969 "trsvcid": "46258" 00:20:14.969 }, 00:20:14.969 "auth": { 00:20:14.969 "state": "completed", 00:20:14.969 "digest": "sha256", 00:20:14.969 "dhgroup": "ffdhe4096" 00:20:14.969 } 00:20:14.969 } 00:20:14.969 ]' 00:20:14.969 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.228 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.486 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:15.486 11:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.422 11:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.422 11:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.681 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.252 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.252 { 00:20:17.252 "cntlid": 27, 00:20:17.252 "qid": 0, 00:20:17.252 "state": "enabled", 00:20:17.252 "thread": "nvmf_tgt_poll_group_000", 00:20:17.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.252 "listen_address": { 00:20:17.252 "trtype": "TCP", 00:20:17.252 "adrfam": "IPv4", 00:20:17.252 "traddr": "10.0.0.2", 00:20:17.252 
"trsvcid": "4420" 00:20:17.252 }, 00:20:17.252 "peer_address": { 00:20:17.252 "trtype": "TCP", 00:20:17.252 "adrfam": "IPv4", 00:20:17.252 "traddr": "10.0.0.1", 00:20:17.252 "trsvcid": "46288" 00:20:17.252 }, 00:20:17.252 "auth": { 00:20:17.252 "state": "completed", 00:20:17.252 "digest": "sha256", 00:20:17.252 "dhgroup": "ffdhe4096" 00:20:17.252 } 00:20:17.252 } 00:20:17.252 ]' 00:20:17.252 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.510 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.510 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.510 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:17.510 11:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.510 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.510 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.510 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.769 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:17.769 11:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:18.706 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.965 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.223 00:20:19.223 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.223 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:19.223 11:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.510 { 00:20:19.510 "cntlid": 29, 00:20:19.510 "qid": 0, 00:20:19.510 "state": "enabled", 00:20:19.510 "thread": "nvmf_tgt_poll_group_000", 00:20:19.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:19.510 "listen_address": { 00:20:19.510 "trtype": "TCP", 00:20:19.510 "adrfam": "IPv4", 00:20:19.510 "traddr": "10.0.0.2", 00:20:19.510 "trsvcid": "4420" 00:20:19.510 }, 00:20:19.510 "peer_address": { 00:20:19.510 "trtype": "TCP", 00:20:19.510 "adrfam": "IPv4", 00:20:19.510 "traddr": "10.0.0.1", 00:20:19.510 "trsvcid": "37638" 00:20:19.510 }, 00:20:19.510 "auth": { 00:20:19.510 "state": "completed", 00:20:19.510 "digest": "sha256", 00:20:19.510 "dhgroup": "ffdhe4096" 00:20:19.510 } 00:20:19.510 } 00:20:19.510 ]' 00:20:19.510 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.768 11:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.768 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.027 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:20.027 11:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.967 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.238 11:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.498 00:20:21.498 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.499 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.499 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.759 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.759 { 00:20:21.759 "cntlid": 31, 00:20:21.759 "qid": 0, 00:20:21.759 "state": "enabled", 00:20:21.759 "thread": "nvmf_tgt_poll_group_000", 00:20:21.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.759 "listen_address": { 00:20:21.759 "trtype": "TCP", 00:20:21.759 "adrfam": "IPv4", 00:20:21.759 "traddr": "10.0.0.2", 00:20:21.759 "trsvcid": "4420" 00:20:21.759 }, 00:20:21.759 "peer_address": { 00:20:21.759 "trtype": "TCP", 00:20:21.759 "adrfam": "IPv4", 00:20:21.759 "traddr": "10.0.0.1", 00:20:21.759 "trsvcid": "37660" 00:20:21.759 }, 00:20:21.759 "auth": { 00:20:21.759 "state": "completed", 00:20:21.759 "digest": "sha256", 00:20:21.759 "dhgroup": "ffdhe4096" 00:20:21.759 } 00:20:21.759 } 00:20:21.759 ]' 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.018 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.277 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:22.278 11:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.213 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.213 11:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.471 11:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.041 00:20:24.041 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.041 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.041 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.300 { 00:20:24.300 "cntlid": 33, 00:20:24.300 "qid": 0, 00:20:24.300 "state": "enabled", 00:20:24.300 "thread": "nvmf_tgt_poll_group_000", 00:20:24.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:24.300 "listen_address": { 00:20:24.300 "trtype": "TCP", 00:20:24.300 "adrfam": "IPv4", 00:20:24.300 "traddr": "10.0.0.2", 00:20:24.300 
"trsvcid": "4420" 00:20:24.300 }, 00:20:24.300 "peer_address": { 00:20:24.300 "trtype": "TCP", 00:20:24.300 "adrfam": "IPv4", 00:20:24.300 "traddr": "10.0.0.1", 00:20:24.300 "trsvcid": "37678" 00:20:24.300 }, 00:20:24.300 "auth": { 00:20:24.300 "state": "completed", 00:20:24.300 "digest": "sha256", 00:20:24.300 "dhgroup": "ffdhe6144" 00:20:24.300 } 00:20:24.300 } 00:20:24.300 ]' 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.300 11:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.559 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:24.559 11:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.496 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.755 11:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.755 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.322 00:20:26.322 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.322 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.322 11:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.581 { 00:20:26.581 "cntlid": 35, 00:20:26.581 "qid": 0, 00:20:26.581 "state": "enabled", 00:20:26.581 "thread": "nvmf_tgt_poll_group_000", 00:20:26.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:26.581 "listen_address": { 00:20:26.581 "trtype": "TCP", 00:20:26.581 "adrfam": "IPv4", 00:20:26.581 "traddr": "10.0.0.2", 00:20:26.581 "trsvcid": "4420" 00:20:26.581 }, 00:20:26.581 "peer_address": { 00:20:26.581 "trtype": "TCP", 00:20:26.581 "adrfam": "IPv4", 00:20:26.581 "traddr": "10.0.0.1", 00:20:26.581 "trsvcid": "37706" 00:20:26.581 }, 00:20:26.581 "auth": { 00:20:26.581 "state": "completed", 00:20:26.581 "digest": "sha256", 00:20:26.581 "dhgroup": "ffdhe6144" 00:20:26.581 } 00:20:26.581 } 00:20:26.581 ]' 00:20:26.581 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.839 11:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.839 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.839 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.839 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.839 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.839 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.840 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.098 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:27.098 11:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.041 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.300 11:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.870 00:20:28.870 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.870 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.870 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.129 11:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.129 { 00:20:29.129 "cntlid": 37, 00:20:29.129 "qid": 0, 00:20:29.129 "state": "enabled", 00:20:29.129 "thread": "nvmf_tgt_poll_group_000", 00:20:29.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.129 "listen_address": { 00:20:29.129 "trtype": "TCP", 00:20:29.129 "adrfam": "IPv4", 00:20:29.129 "traddr": "10.0.0.2", 00:20:29.129 "trsvcid": "4420" 00:20:29.129 }, 00:20:29.129 "peer_address": { 00:20:29.129 "trtype": "TCP", 00:20:29.129 "adrfam": "IPv4", 00:20:29.129 "traddr": "10.0.0.1", 00:20:29.129 "trsvcid": "37736" 00:20:29.129 }, 00:20:29.129 "auth": { 00:20:29.129 "state": "completed", 00:20:29.129 "digest": "sha256", 00:20:29.129 "dhgroup": "ffdhe6144" 00:20:29.129 } 00:20:29.129 } 00:20:29.129 ]' 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.129 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.130 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.130 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.388 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.388 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.388 11:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.649 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:29.649 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.588 11:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.588 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.155 00:20:31.155 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.155 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.155 11:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.414 { 00:20:31.414 "cntlid": 39, 00:20:31.414 "qid": 0, 00:20:31.414 "state": "enabled", 00:20:31.414 "thread": "nvmf_tgt_poll_group_000", 00:20:31.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.414 "listen_address": { 00:20:31.414 "trtype": "TCP", 00:20:31.414 "adrfam": 
"IPv4", 00:20:31.414 "traddr": "10.0.0.2", 00:20:31.414 "trsvcid": "4420" 00:20:31.414 }, 00:20:31.414 "peer_address": { 00:20:31.414 "trtype": "TCP", 00:20:31.414 "adrfam": "IPv4", 00:20:31.414 "traddr": "10.0.0.1", 00:20:31.414 "trsvcid": "35626" 00:20:31.414 }, 00:20:31.414 "auth": { 00:20:31.414 "state": "completed", 00:20:31.414 "digest": "sha256", 00:20:31.414 "dhgroup": "ffdhe6144" 00:20:31.414 } 00:20:31.414 } 00:20:31.414 ]' 00:20:31.414 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.695 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.977 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:31.977 11:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.970 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.256 
11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.256 11:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.849 00:20:33.849 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.849 11:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.849 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.107 { 00:20:34.107 "cntlid": 41, 00:20:34.107 "qid": 0, 00:20:34.107 "state": "enabled", 00:20:34.107 "thread": "nvmf_tgt_poll_group_000", 00:20:34.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.107 "listen_address": { 00:20:34.107 "trtype": "TCP", 00:20:34.107 "adrfam": "IPv4", 00:20:34.107 "traddr": "10.0.0.2", 00:20:34.107 "trsvcid": "4420" 00:20:34.107 }, 00:20:34.107 "peer_address": { 00:20:34.107 "trtype": "TCP", 00:20:34.107 "adrfam": "IPv4", 00:20:34.107 "traddr": "10.0.0.1", 00:20:34.107 "trsvcid": "35650" 00:20:34.107 }, 00:20:34.107 "auth": { 00:20:34.107 "state": "completed", 00:20:34.107 "digest": "sha256", 00:20:34.107 "dhgroup": "ffdhe8192" 00:20:34.107 } 00:20:34.107 } 00:20:34.107 ]' 00:20:34.107 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.365 11:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.623 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:34.623 11:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.557 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 11:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.749 00:20:36.749 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.749 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.749 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.009 11:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.009 { 00:20:37.009 "cntlid": 43, 00:20:37.009 "qid": 0, 00:20:37.009 "state": "enabled", 00:20:37.009 "thread": "nvmf_tgt_poll_group_000", 00:20:37.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:37.009 "listen_address": { 00:20:37.009 "trtype": "TCP", 00:20:37.009 "adrfam": "IPv4", 00:20:37.009 "traddr": "10.0.0.2", 00:20:37.009 "trsvcid": "4420" 00:20:37.009 }, 00:20:37.009 "peer_address": { 00:20:37.009 "trtype": "TCP", 00:20:37.009 "adrfam": "IPv4", 00:20:37.009 "traddr": "10.0.0.1", 00:20:37.009 "trsvcid": "35672" 00:20:37.009 }, 00:20:37.009 "auth": { 00:20:37.009 "state": "completed", 00:20:37.009 "digest": "sha256", 00:20:37.009 "dhgroup": "ffdhe8192" 00:20:37.009 } 00:20:37.009 } 00:20:37.009 ]' 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.009 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.267 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:37.267 11:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.204 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.462 11:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.462 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.462 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.462 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.462 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.396 00:20:39.396 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.396 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.396 11:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.654 { 00:20:39.654 "cntlid": 45, 00:20:39.654 "qid": 0, 00:20:39.654 "state": "enabled", 00:20:39.654 "thread": "nvmf_tgt_poll_group_000", 00:20:39.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:39.654 
"listen_address": { 00:20:39.654 "trtype": "TCP", 00:20:39.654 "adrfam": "IPv4", 00:20:39.654 "traddr": "10.0.0.2", 00:20:39.654 "trsvcid": "4420" 00:20:39.654 }, 00:20:39.654 "peer_address": { 00:20:39.654 "trtype": "TCP", 00:20:39.654 "adrfam": "IPv4", 00:20:39.654 "traddr": "10.0.0.1", 00:20:39.654 "trsvcid": "35696" 00:20:39.654 }, 00:20:39.654 "auth": { 00:20:39.654 "state": "completed", 00:20:39.654 "digest": "sha256", 00:20:39.654 "dhgroup": "ffdhe8192" 00:20:39.654 } 00:20:39.654 } 00:20:39.654 ]' 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.654 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.914 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:39.914 11:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:40.850 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.419 11:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.986 00:20:41.986 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.986 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:41.986 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.244 { 00:20:42.244 "cntlid": 47, 00:20:42.244 "qid": 0, 00:20:42.244 "state": "enabled", 00:20:42.244 "thread": "nvmf_tgt_poll_group_000", 00:20:42.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:42.244 "listen_address": { 00:20:42.244 "trtype": "TCP", 00:20:42.244 "adrfam": "IPv4", 00:20:42.244 "traddr": "10.0.0.2", 00:20:42.244 "trsvcid": "4420" 00:20:42.244 }, 00:20:42.244 "peer_address": { 00:20:42.244 "trtype": "TCP", 00:20:42.244 "adrfam": "IPv4", 00:20:42.244 "traddr": "10.0.0.1", 00:20:42.244 "trsvcid": "33084" 00:20:42.244 }, 00:20:42.244 "auth": { 00:20:42.244 "state": "completed", 00:20:42.244 "digest": "sha256", 00:20:42.244 "dhgroup": "ffdhe8192" 00:20:42.244 } 00:20:42.244 } 00:20:42.244 ]' 00:20:42.244 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.503 11:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.503 11:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.761 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:42.761 11:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.696 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.955 
11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.955 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.213 00:20:44.213 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.213 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.213 11:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.470 { 00:20:44.470 "cntlid": 49, 00:20:44.470 "qid": 0, 00:20:44.470 "state": "enabled", 00:20:44.470 "thread": "nvmf_tgt_poll_group_000", 00:20:44.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:44.470 "listen_address": { 00:20:44.470 "trtype": "TCP", 00:20:44.470 "adrfam": "IPv4", 00:20:44.470 "traddr": "10.0.0.2", 00:20:44.470 "trsvcid": "4420" 00:20:44.470 }, 00:20:44.470 "peer_address": { 00:20:44.470 "trtype": "TCP", 00:20:44.470 "adrfam": "IPv4", 00:20:44.470 "traddr": "10.0.0.1", 00:20:44.470 "trsvcid": "33094" 00:20:44.470 }, 00:20:44.470 "auth": { 00:20:44.470 "state": "completed", 00:20:44.470 "digest": "sha384", 00:20:44.470 "dhgroup": "null" 00:20:44.470 } 00:20:44.470 } 00:20:44.470 ]' 00:20:44.470 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:44.736 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.994 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:44.994 11:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.928 11:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.928 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.187 11:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.445 00:20:46.445 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.445 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.445 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.703 { 00:20:46.703 "cntlid": 51, 00:20:46.703 "qid": 0, 00:20:46.703 "state": "enabled", 00:20:46.703 "thread": "nvmf_tgt_poll_group_000", 00:20:46.703 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:46.703 "listen_address": { 00:20:46.703 "trtype": "TCP", 00:20:46.703 "adrfam": "IPv4", 00:20:46.703 "traddr": "10.0.0.2", 00:20:46.703 "trsvcid": "4420" 00:20:46.703 }, 00:20:46.703 "peer_address": { 00:20:46.703 "trtype": "TCP", 00:20:46.703 "adrfam": "IPv4", 00:20:46.703 "traddr": "10.0.0.1", 00:20:46.703 "trsvcid": "33116" 00:20:46.703 }, 00:20:46.703 "auth": { 00:20:46.703 "state": "completed", 00:20:46.703 "digest": "sha384", 00:20:46.703 "dhgroup": "null" 00:20:46.703 } 00:20:46.703 } 00:20:46.703 ]' 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.703 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.961 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.961 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.961 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.961 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.961 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.218 11:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:47.219 11:15:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.154 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.412 11:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.671 00:20:48.671 11:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.671 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.671 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.928 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.928 { 00:20:48.928 "cntlid": 53, 00:20:48.929 "qid": 0, 00:20:48.929 "state": "enabled", 00:20:48.929 "thread": "nvmf_tgt_poll_group_000", 00:20:48.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:48.929 "listen_address": { 00:20:48.929 "trtype": "TCP", 00:20:48.929 "adrfam": "IPv4", 00:20:48.929 "traddr": "10.0.0.2", 00:20:48.929 "trsvcid": "4420" 00:20:48.929 }, 00:20:48.929 "peer_address": { 00:20:48.929 "trtype": "TCP", 00:20:48.929 "adrfam": "IPv4", 00:20:48.929 "traddr": "10.0.0.1", 00:20:48.929 "trsvcid": "33144" 00:20:48.929 }, 00:20:48.929 "auth": { 00:20:48.929 "state": "completed", 00:20:48.929 "digest": "sha384", 00:20:48.929 "dhgroup": "null" 00:20:48.929 } 00:20:48.929 } 00:20:48.929 ]' 00:20:48.929 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:48.929 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.929 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.929 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:48.929 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.189 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.189 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.189 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.447 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:49.447 11:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.383 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:50.384 11:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:50.642 
11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.642 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.900 00:20:50.900 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.900 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.900 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.158 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.158 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.158 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.158 11:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.158 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.159 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.159 { 00:20:51.159 "cntlid": 55, 00:20:51.159 "qid": 0, 00:20:51.159 "state": "enabled", 00:20:51.159 "thread": "nvmf_tgt_poll_group_000", 00:20:51.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:51.159 "listen_address": { 00:20:51.159 "trtype": "TCP", 00:20:51.159 "adrfam": "IPv4", 00:20:51.159 "traddr": "10.0.0.2", 00:20:51.159 "trsvcid": "4420" 00:20:51.159 }, 00:20:51.159 "peer_address": { 00:20:51.159 "trtype": "TCP", 00:20:51.159 "adrfam": "IPv4", 00:20:51.159 "traddr": "10.0.0.1", 00:20:51.159 "trsvcid": "45744" 00:20:51.159 }, 00:20:51.159 "auth": { 00:20:51.159 "state": "completed", 00:20:51.159 "digest": "sha384", 00:20:51.159 "dhgroup": "null" 00:20:51.159 } 00:20:51.159 } 00:20:51.159 ]' 00:20:51.159 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.159 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.159 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.417 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.417 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.417 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.417 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.417 11:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.675 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:51.675 11:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.612 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.612 11:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.871 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.130 00:20:53.130 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.130 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.130 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.388 { 00:20:53.388 "cntlid": 57, 00:20:53.388 "qid": 0, 00:20:53.388 "state": "enabled", 00:20:53.388 "thread": "nvmf_tgt_poll_group_000", 00:20:53.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:53.388 "listen_address": { 00:20:53.388 "trtype": "TCP", 00:20:53.388 "adrfam": "IPv4", 00:20:53.388 "traddr": "10.0.0.2", 00:20:53.388 
"trsvcid": "4420" 00:20:53.388 }, 00:20:53.388 "peer_address": { 00:20:53.388 "trtype": "TCP", 00:20:53.388 "adrfam": "IPv4", 00:20:53.388 "traddr": "10.0.0.1", 00:20:53.388 "trsvcid": "45774" 00:20:53.388 }, 00:20:53.388 "auth": { 00:20:53.388 "state": "completed", 00:20:53.388 "digest": "sha384", 00:20:53.388 "dhgroup": "ffdhe2048" 00:20:53.388 } 00:20:53.388 } 00:20:53.388 ]' 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.388 11:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.388 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.388 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.646 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.646 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.646 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.905 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:53.905 11:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.841 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.098 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.099 11:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.099 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.356 00:20:55.356 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.356 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.356 11:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.615 { 00:20:55.615 "cntlid": 59, 00:20:55.615 "qid": 0, 00:20:55.615 "state": "enabled", 00:20:55.615 "thread": "nvmf_tgt_poll_group_000", 00:20:55.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:55.615 "listen_address": { 00:20:55.615 "trtype": "TCP", 00:20:55.615 "adrfam": "IPv4", 00:20:55.615 "traddr": "10.0.0.2", 00:20:55.615 "trsvcid": "4420" 00:20:55.615 }, 00:20:55.615 "peer_address": { 00:20:55.615 "trtype": "TCP", 00:20:55.615 "adrfam": "IPv4", 00:20:55.615 "traddr": "10.0.0.1", 00:20:55.615 "trsvcid": "45804" 00:20:55.615 }, 00:20:55.615 "auth": { 00:20:55.615 "state": "completed", 00:20:55.615 "digest": "sha384", 00:20:55.615 "dhgroup": "ffdhe2048" 00:20:55.615 } 00:20:55.615 } 00:20:55.615 ]' 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.615 11:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.615 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.874 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.874 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.874 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.132 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:56.132 11:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.066 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.324 11:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.582 00:20:57.582 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.582 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.582 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.840 11:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.840 { 00:20:57.840 "cntlid": 61, 00:20:57.840 "qid": 0, 00:20:57.840 "state": "enabled", 00:20:57.840 "thread": "nvmf_tgt_poll_group_000", 00:20:57.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.840 "listen_address": { 00:20:57.840 "trtype": "TCP", 00:20:57.840 "adrfam": "IPv4", 00:20:57.840 "traddr": "10.0.0.2", 00:20:57.840 "trsvcid": "4420" 00:20:57.840 }, 00:20:57.840 "peer_address": { 00:20:57.840 "trtype": "TCP", 00:20:57.840 "adrfam": "IPv4", 00:20:57.840 "traddr": "10.0.0.1", 00:20:57.840 "trsvcid": "45832" 00:20:57.840 }, 00:20:57.840 "auth": { 00:20:57.840 "state": "completed", 00:20:57.840 "digest": "sha384", 00:20:57.840 "dhgroup": "ffdhe2048" 00:20:57.840 } 00:20:57.840 } 00:20:57.840 ]' 00:20:57.840 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.098 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.356 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:58.356 11:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:20:59.300 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.301 11:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.558 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.816 00:20:59.816 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.816 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.816 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.074 { 00:21:00.074 "cntlid": 63, 00:21:00.074 "qid": 0, 00:21:00.074 "state": "enabled", 00:21:00.074 "thread": "nvmf_tgt_poll_group_000", 00:21:00.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:00.074 "listen_address": { 00:21:00.074 "trtype": "TCP", 00:21:00.074 "adrfam": 
"IPv4", 00:21:00.074 "traddr": "10.0.0.2", 00:21:00.074 "trsvcid": "4420" 00:21:00.074 }, 00:21:00.074 "peer_address": { 00:21:00.074 "trtype": "TCP", 00:21:00.074 "adrfam": "IPv4", 00:21:00.074 "traddr": "10.0.0.1", 00:21:00.074 "trsvcid": "38482" 00:21:00.074 }, 00:21:00.074 "auth": { 00:21:00.074 "state": "completed", 00:21:00.074 "digest": "sha384", 00:21:00.074 "dhgroup": "ffdhe2048" 00:21:00.074 } 00:21:00.074 } 00:21:00.074 ]' 00:21:00.074 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.332 11:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.591 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:00.591 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.528 11:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.786 
11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.786 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.045 00:21:02.045 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.045 11:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.045 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.303 { 00:21:02.303 "cntlid": 65, 00:21:02.303 "qid": 0, 00:21:02.303 "state": "enabled", 00:21:02.303 "thread": "nvmf_tgt_poll_group_000", 00:21:02.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.303 "listen_address": { 00:21:02.303 "trtype": "TCP", 00:21:02.303 "adrfam": "IPv4", 00:21:02.303 "traddr": "10.0.0.2", 00:21:02.303 "trsvcid": "4420" 00:21:02.303 }, 00:21:02.303 "peer_address": { 00:21:02.303 "trtype": "TCP", 00:21:02.303 "adrfam": "IPv4", 00:21:02.303 "traddr": "10.0.0.1", 00:21:02.303 "trsvcid": "38530" 00:21:02.303 }, 00:21:02.303 "auth": { 00:21:02.303 "state": "completed", 00:21:02.303 "digest": "sha384", 00:21:02.303 "dhgroup": "ffdhe3072" 00:21:02.303 } 00:21:02.303 } 00:21:02.303 ]' 00:21:02.303 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.304 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:02.304 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.563 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.563 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.563 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.563 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.563 11:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.822 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:02.822 11:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.760 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.018 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.019 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.019 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.276 00:21:04.276 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.276 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.276 11:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.535 11:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.535 { 00:21:04.535 "cntlid": 67, 00:21:04.535 "qid": 0, 00:21:04.535 "state": "enabled", 00:21:04.535 "thread": "nvmf_tgt_poll_group_000", 00:21:04.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.535 "listen_address": { 00:21:04.535 "trtype": "TCP", 00:21:04.535 "adrfam": "IPv4", 00:21:04.535 "traddr": "10.0.0.2", 00:21:04.535 "trsvcid": "4420" 00:21:04.535 }, 00:21:04.535 "peer_address": { 00:21:04.535 "trtype": "TCP", 00:21:04.535 "adrfam": "IPv4", 00:21:04.535 "traddr": "10.0.0.1", 00:21:04.535 "trsvcid": "38560" 00:21:04.535 }, 00:21:04.535 "auth": { 00:21:04.535 "state": "completed", 00:21:04.535 "digest": "sha384", 00:21:04.535 "dhgroup": "ffdhe3072" 00:21:04.535 } 00:21:04.535 } 00:21:04.535 ]' 00:21:04.535 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.793 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.051 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:05.051 11:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.991 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.249 11:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.507 00:21:06.507 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.507 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.507 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.766 { 00:21:06.766 "cntlid": 69, 00:21:06.766 "qid": 0, 00:21:06.766 "state": "enabled", 00:21:06.766 "thread": "nvmf_tgt_poll_group_000", 00:21:06.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.766 
"listen_address": { 00:21:06.766 "trtype": "TCP", 00:21:06.766 "adrfam": "IPv4", 00:21:06.766 "traddr": "10.0.0.2", 00:21:06.766 "trsvcid": "4420" 00:21:06.766 }, 00:21:06.766 "peer_address": { 00:21:06.766 "trtype": "TCP", 00:21:06.766 "adrfam": "IPv4", 00:21:06.766 "traddr": "10.0.0.1", 00:21:06.766 "trsvcid": "38594" 00:21:06.766 }, 00:21:06.766 "auth": { 00:21:06.766 "state": "completed", 00:21:06.766 "digest": "sha384", 00:21:06.766 "dhgroup": "ffdhe3072" 00:21:06.766 } 00:21:06.766 } 00:21:06.766 ]' 00:21:06.766 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.024 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.282 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:07.282 11:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.219 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.477 11:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.735 00:21:08.735 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.735 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:08.735 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.993 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.993 { 00:21:08.993 "cntlid": 71, 00:21:08.993 "qid": 0, 00:21:08.993 "state": "enabled", 00:21:08.993 "thread": "nvmf_tgt_poll_group_000", 00:21:08.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:08.993 "listen_address": { 00:21:08.993 "trtype": "TCP", 00:21:08.993 "adrfam": "IPv4", 00:21:08.993 "traddr": "10.0.0.2", 00:21:08.993 "trsvcid": "4420" 00:21:08.993 }, 00:21:08.993 "peer_address": { 00:21:08.993 "trtype": "TCP", 00:21:08.993 "adrfam": "IPv4", 00:21:08.993 "traddr": "10.0.0.1", 00:21:08.993 "trsvcid": "38618" 00:21:08.993 }, 00:21:08.993 "auth": { 00:21:08.993 "state": "completed", 00:21:08.993 "digest": "sha384", 00:21:08.993 "dhgroup": "ffdhe3072" 00:21:08.993 } 00:21:08.993 } 00:21:08.993 ]' 00:21:08.994 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.251 11:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.251 11:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.509 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:09.509 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.445 11:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.703 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.961 00:21:10.961 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.961 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.961 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.527 11:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.527 { 00:21:11.527 "cntlid": 73, 00:21:11.527 "qid": 0, 00:21:11.527 "state": "enabled", 00:21:11.527 "thread": "nvmf_tgt_poll_group_000", 00:21:11.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.527 "listen_address": { 00:21:11.527 "trtype": "TCP", 00:21:11.527 "adrfam": "IPv4", 00:21:11.527 "traddr": "10.0.0.2", 00:21:11.527 "trsvcid": "4420" 00:21:11.527 }, 00:21:11.527 "peer_address": { 00:21:11.527 "trtype": "TCP", 00:21:11.527 "adrfam": "IPv4", 00:21:11.527 "traddr": "10.0.0.1", 00:21:11.527 "trsvcid": "51854" 00:21:11.527 }, 00:21:11.527 "auth": { 00:21:11.527 "state": "completed", 00:21:11.527 "digest": "sha384", 00:21:11.527 "dhgroup": "ffdhe4096" 00:21:11.527 } 00:21:11.527 } 00:21:11.527 ]' 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.527 11:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.527 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.527 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.527 11:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.784 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:11.784 11:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:12.722 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.722 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.722 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.723 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.723 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.723 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.723 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.723 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.981 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.239 00:21:13.239 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.239 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.239 11:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.498 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.498 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.498 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.498 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.757 { 00:21:13.757 "cntlid": 75, 00:21:13.757 "qid": 0, 00:21:13.757 "state": "enabled", 00:21:13.757 "thread": "nvmf_tgt_poll_group_000", 00:21:13.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:13.757 
"listen_address": { 00:21:13.757 "trtype": "TCP", 00:21:13.757 "adrfam": "IPv4", 00:21:13.757 "traddr": "10.0.0.2", 00:21:13.757 "trsvcid": "4420" 00:21:13.757 }, 00:21:13.757 "peer_address": { 00:21:13.757 "trtype": "TCP", 00:21:13.757 "adrfam": "IPv4", 00:21:13.757 "traddr": "10.0.0.1", 00:21:13.757 "trsvcid": "51872" 00:21:13.757 }, 00:21:13.757 "auth": { 00:21:13.757 "state": "completed", 00:21:13.757 "digest": "sha384", 00:21:13.757 "dhgroup": "ffdhe4096" 00:21:13.757 } 00:21:13.757 } 00:21:13.757 ]' 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.757 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.015 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:14.015 11:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.954 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.212 11:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:15.778 00:21:15.779 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:15.779 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.779 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.036 { 00:21:16.036 "cntlid": 77, 00:21:16.036 "qid": 0, 00:21:16.036 "state": "enabled", 00:21:16.036 "thread": "nvmf_tgt_poll_group_000", 00:21:16.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.036 "listen_address": { 00:21:16.036 "trtype": "TCP", 00:21:16.036 "adrfam": "IPv4", 00:21:16.036 "traddr": "10.0.0.2", 00:21:16.036 "trsvcid": "4420" 00:21:16.036 }, 00:21:16.036 "peer_address": { 00:21:16.036 "trtype": "TCP", 00:21:16.036 "adrfam": "IPv4", 00:21:16.036 "traddr": "10.0.0.1", 00:21:16.036 "trsvcid": "51892" 00:21:16.036 }, 00:21:16.036 "auth": { 00:21:16.036 "state": "completed", 00:21:16.036 "digest": "sha384", 00:21:16.036 "dhgroup": "ffdhe4096" 00:21:16.036 } 00:21:16.036 } 00:21:16.036 ]' 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.036 11:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.036 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.294 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:16.294 11:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.232 11:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:17.490 11:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:17.490 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.056 00:21:18.056 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.056 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.056 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.314 11:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.314 { 00:21:18.314 "cntlid": 79, 00:21:18.314 "qid": 0, 00:21:18.314 "state": "enabled", 00:21:18.314 "thread": "nvmf_tgt_poll_group_000", 00:21:18.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.314 "listen_address": { 00:21:18.314 "trtype": "TCP", 00:21:18.314 "adrfam": "IPv4", 00:21:18.314 "traddr": "10.0.0.2", 00:21:18.314 "trsvcid": "4420" 00:21:18.314 }, 00:21:18.314 "peer_address": { 00:21:18.314 "trtype": "TCP", 00:21:18.314 "adrfam": "IPv4", 00:21:18.314 "traddr": "10.0.0.1", 00:21:18.314 "trsvcid": "51918" 00:21:18.314 }, 00:21:18.314 "auth": { 00:21:18.314 "state": "completed", 00:21:18.314 "digest": "sha384", 00:21:18.314 "dhgroup": "ffdhe4096" 00:21:18.314 } 00:21:18.314 } 00:21:18.314 ]' 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.314 11:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.314 11:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.573 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:18.573 11:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:19.509 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.769 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.334 00:21:20.334 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.334 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.334 11:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.591 { 00:21:20.591 "cntlid": 81, 00:21:20.591 "qid": 0, 00:21:20.591 "state": "enabled", 00:21:20.591 "thread": "nvmf_tgt_poll_group_000", 00:21:20.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:20.591 "listen_address": { 
00:21:20.591 "trtype": "TCP", 00:21:20.591 "adrfam": "IPv4", 00:21:20.591 "traddr": "10.0.0.2", 00:21:20.591 "trsvcid": "4420" 00:21:20.591 }, 00:21:20.591 "peer_address": { 00:21:20.591 "trtype": "TCP", 00:21:20.591 "adrfam": "IPv4", 00:21:20.591 "traddr": "10.0.0.1", 00:21:20.591 "trsvcid": "35686" 00:21:20.591 }, 00:21:20.591 "auth": { 00:21:20.591 "state": "completed", 00:21:20.591 "digest": "sha384", 00:21:20.591 "dhgroup": "ffdhe6144" 00:21:20.591 } 00:21:20.591 } 00:21:20.591 ]' 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.591 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.850 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.850 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.850 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.108 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:21.108 11:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.049 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.307 11:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.875 00:21:22.875 11:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.876 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.876 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.135 { 00:21:23.135 "cntlid": 83, 00:21:23.135 "qid": 0, 00:21:23.135 "state": "enabled", 00:21:23.135 "thread": "nvmf_tgt_poll_group_000", 00:21:23.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.135 "listen_address": { 00:21:23.135 "trtype": "TCP", 00:21:23.135 "adrfam": "IPv4", 00:21:23.135 "traddr": "10.0.0.2", 00:21:23.135 "trsvcid": "4420" 00:21:23.135 }, 00:21:23.135 "peer_address": { 00:21:23.135 "trtype": "TCP", 00:21:23.135 "adrfam": "IPv4", 00:21:23.135 "traddr": "10.0.0.1", 00:21:23.135 "trsvcid": "35704" 00:21:23.135 }, 00:21:23.135 "auth": { 00:21:23.135 "state": "completed", 00:21:23.135 "digest": "sha384", 00:21:23.135 "dhgroup": "ffdhe6144" 00:21:23.135 } 00:21:23.135 } 00:21:23.135 ]' 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.135 11:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.394 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:23.394 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.330 11:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:24.330 11:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.589 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.156 00:21:25.156 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.156 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.156 11:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.414 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.414 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.415 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.415 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.415 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.415 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.415 { 00:21:25.415 "cntlid": 85, 00:21:25.415 "qid": 0, 00:21:25.415 "state": "enabled", 00:21:25.415 "thread": "nvmf_tgt_poll_group_000", 00:21:25.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.415 "listen_address": { 00:21:25.415 "trtype": "TCP", 00:21:25.415 "adrfam": "IPv4", 00:21:25.415 "traddr": "10.0.0.2", 00:21:25.415 "trsvcid": "4420" 00:21:25.415 }, 00:21:25.415 "peer_address": { 00:21:25.415 "trtype": "TCP", 00:21:25.415 "adrfam": "IPv4", 00:21:25.415 "traddr": "10.0.0.1", 00:21:25.415 "trsvcid": "35724" 00:21:25.415 }, 00:21:25.415 "auth": { 00:21:25.415 "state": "completed", 00:21:25.415 "digest": "sha384", 00:21:25.415 "dhgroup": "ffdhe6144" 00:21:25.415 } 00:21:25.415 } 00:21:25.415 ]' 00:21:25.415 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.673 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.674 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.932 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:25.932 11:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.867 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.125 11:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.694 00:21:27.694 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.694 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.694 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.952 { 00:21:27.952 "cntlid": 87, 00:21:27.952 "qid": 0, 00:21:27.952 "state": "enabled", 00:21:27.952 "thread": "nvmf_tgt_poll_group_000", 00:21:27.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.952 "listen_address": { 00:21:27.952 "trtype": 
"TCP", 00:21:27.952 "adrfam": "IPv4", 00:21:27.952 "traddr": "10.0.0.2", 00:21:27.952 "trsvcid": "4420" 00:21:27.952 }, 00:21:27.952 "peer_address": { 00:21:27.952 "trtype": "TCP", 00:21:27.952 "adrfam": "IPv4", 00:21:27.952 "traddr": "10.0.0.1", 00:21:27.952 "trsvcid": "35752" 00:21:27.952 }, 00:21:27.952 "auth": { 00:21:27.952 "state": "completed", 00:21:27.952 "digest": "sha384", 00:21:27.952 "dhgroup": "ffdhe6144" 00:21:27.952 } 00:21:27.952 } 00:21:27.952 ]' 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.952 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.953 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.953 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.953 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.953 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.953 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.212 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:28.212 11:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.149 11:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.408 11:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.408 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.346 00:21:30.346 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.346 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.346 11:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.604 { 00:21:30.604 "cntlid": 89, 00:21:30.604 "qid": 0, 00:21:30.604 "state": "enabled", 00:21:30.604 "thread": "nvmf_tgt_poll_group_000", 00:21:30.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.604 "listen_address": { 00:21:30.604 "trtype": "TCP", 00:21:30.604 "adrfam": "IPv4", 00:21:30.604 "traddr": "10.0.0.2", 00:21:30.604 "trsvcid": "4420" 00:21:30.604 }, 00:21:30.604 "peer_address": { 00:21:30.604 "trtype": "TCP", 00:21:30.604 "adrfam": "IPv4", 00:21:30.604 "traddr": "10.0.0.1", 00:21:30.604 "trsvcid": "37108" 00:21:30.604 }, 00:21:30.604 "auth": { 00:21:30.604 "state": "completed", 00:21:30.604 "digest": "sha384", 00:21:30.604 "dhgroup": "ffdhe8192" 00:21:30.604 } 00:21:30.604 } 00:21:30.604 ]' 00:21:30.604 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.863 11:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.863 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.121 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:31.121 11:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.058 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.319 11:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:33.255
00:21:33.255 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:33.255 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:33.255 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:33.513 {
00:21:33.513 "cntlid": 91,
00:21:33.513 "qid": 0,
00:21:33.513 "state": "enabled",
00:21:33.513 "thread": "nvmf_tgt_poll_group_000",
00:21:33.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:33.513 "listen_address": {
00:21:33.513 "trtype": "TCP",
00:21:33.513 "adrfam": "IPv4",
00:21:33.513 "traddr": "10.0.0.2",
00:21:33.513 "trsvcid": "4420"
00:21:33.513 },
00:21:33.513 "peer_address": {
00:21:33.513 "trtype": "TCP",
00:21:33.513 "adrfam": "IPv4",
00:21:33.513 "traddr": "10.0.0.1",
00:21:33.513 "trsvcid": "37126"
00:21:33.513 },
00:21:33.513 "auth": {
00:21:33.513 "state": "completed",
00:21:33.513 "digest": "sha384",
00:21:33.513 "dhgroup": "ffdhe8192"
00:21:33.513 }
00:21:33.513 }
00:21:33.513 ]'
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:33.513 11:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:33.513 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:33.513 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:33.513 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:33.513 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:33.513 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:33.772 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==:
00:21:33.772 11:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==:
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:34.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:34.708 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.966 11:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:35.901
00:21:35.901 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:35.901 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:35.901 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:36.159 {
00:21:36.159 "cntlid": 93,
00:21:36.159 "qid": 0,
00:21:36.159 "state": "enabled",
00:21:36.159 "thread": "nvmf_tgt_poll_group_000",
00:21:36.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:36.159 "listen_address": {
00:21:36.159 "trtype": "TCP",
00:21:36.159 "adrfam": "IPv4",
00:21:36.159 "traddr": "10.0.0.2",
00:21:36.159 "trsvcid": "4420"
00:21:36.159 },
00:21:36.159 "peer_address": {
00:21:36.159 "trtype": "TCP",
00:21:36.159 "adrfam": "IPv4",
00:21:36.159 "traddr": "10.0.0.1",
00:21:36.159 "trsvcid": "37144"
00:21:36.159 },
00:21:36.159 "auth": {
00:21:36.159 "state": "completed",
00:21:36.159 "digest": "sha384",
00:21:36.159 "dhgroup": "ffdhe8192"
00:21:36.159 }
00:21:36.159 }
00:21:36.159 ]'
00:21:36.159 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:36.418 11:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:36.676 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe:
00:21:36.676 11:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe:
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:37.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:37.613 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:37.871 11:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:38.808
00:21:38.808 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:38.808 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:38.808 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:39.066 {
00:21:39.066 "cntlid": 95,
00:21:39.066 "qid": 0,
00:21:39.066 "state": "enabled",
00:21:39.066 "thread": "nvmf_tgt_poll_group_000",
00:21:39.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:39.066 "listen_address": {
00:21:39.066 "trtype": "TCP",
00:21:39.066 "adrfam": "IPv4",
00:21:39.066 "traddr": "10.0.0.2",
00:21:39.066 "trsvcid": "4420"
00:21:39.066 },
00:21:39.066 "peer_address": {
00:21:39.066 "trtype": "TCP",
00:21:39.066 "adrfam": "IPv4",
00:21:39.066 "traddr": "10.0.0.1",
00:21:39.066 "trsvcid": "37174"
00:21:39.066 },
00:21:39.066 "auth": {
00:21:39.066 "state": "completed",
00:21:39.066 "digest": "sha384",
00:21:39.066 "dhgroup": "ffdhe8192"
00:21:39.066 }
00:21:39.066 }
00:21:39.066 ]'
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:39.066 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:39.325 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=:
00:21:39.325 11:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=:
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:40.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:40.263 11:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:40.521 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:40.779
00:21:40.779 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:40.779 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:40.779 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:41.038 {
00:21:41.038 "cntlid": 97,
00:21:41.038 "qid": 0,
00:21:41.038 "state": "enabled",
00:21:41.038 "thread": "nvmf_tgt_poll_group_000",
00:21:41.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:41.038 "listen_address": {
00:21:41.038 "trtype": "TCP",
00:21:41.038 "adrfam": "IPv4",
00:21:41.038 "traddr": "10.0.0.2",
00:21:41.038 "trsvcid": "4420"
00:21:41.038 },
00:21:41.038 "peer_address": {
00:21:41.038 "trtype": "TCP",
00:21:41.038 "adrfam": "IPv4",
00:21:41.038 "traddr": "10.0.0.1",
00:21:41.038 "trsvcid": "38260"
00:21:41.038 },
00:21:41.038 "auth": {
00:21:41.038 "state": "completed",
00:21:41.038 "digest": "sha512",
00:21:41.038 "dhgroup": "null"
00:21:41.038 }
00:21:41.038 }
00:21:41.038 ]'
00:21:41.038 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:41.297 11:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:41.555 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=:
00:21:41.555 11:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=:
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:42.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:42.495 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:42.753 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:43.012
00:21:43.012 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:43.012 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:43.012 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.271 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:43.271 {
00:21:43.271 "cntlid": 99,
00:21:43.271 "qid": 0,
00:21:43.271 "state": "enabled",
00:21:43.271 "thread": "nvmf_tgt_poll_group_000",
00:21:43.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:43.271 "listen_address": {
00:21:43.271 "trtype": "TCP",
00:21:43.271 "adrfam": "IPv4",
00:21:43.271 "traddr": "10.0.0.2",
00:21:43.271 "trsvcid": "4420"
00:21:43.271 },
00:21:43.271 "peer_address": {
00:21:43.271 "trtype": "TCP",
00:21:43.271 "adrfam": "IPv4",
00:21:43.271 "traddr": "10.0.0.1",
00:21:43.271 "trsvcid": "38284"
00:21:43.271 },
00:21:43.271 "auth": {
00:21:43.271 "state": "completed",
00:21:43.271 "digest": "sha512",
00:21:43.271 "dhgroup": "null"
00:21:43.271 }
00:21:43.271 }
00:21:43.271 ]'
00:21:43.529 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:43.529 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:43.529 11:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:43.529 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:43.529 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:43.529 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:43.529 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:43.529 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:43.787 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==:
00:21:43.787 11:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==:
00:21:44.729 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:44.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:44.730 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:44.989 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:45.246
00:21:45.246 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:45.246 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:45.246 11:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.504 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.504 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:45.504 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.504 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:45.762 {
00:21:45.762 "cntlid": 101,
00:21:45.762 "qid": 0,
00:21:45.762 "state": "enabled",
00:21:45.762 "thread": "nvmf_tgt_poll_group_000",
00:21:45.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:21:45.762 "listen_address": {
00:21:45.762 "trtype": "TCP",
00:21:45.762 "adrfam": "IPv4",
00:21:45.762 "traddr": "10.0.0.2",
00:21:45.762 "trsvcid": "4420"
00:21:45.762 },
00:21:45.762 "peer_address": {
00:21:45.762 "trtype": "TCP",
00:21:45.762 "adrfam": "IPv4",
00:21:45.762 "traddr": "10.0.0.1",
00:21:45.762 "trsvcid": "38312"
00:21:45.762 },
00:21:45.762 "auth": {
00:21:45.762 "state": "completed",
00:21:45.762 "digest": "sha512",
00:21:45.762 "dhgroup": "null"
00:21:45.762 }
00:21:45.762 }
00:21:45.762 ]' 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.762 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.763 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.021 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:46.021 11:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.958 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:46.958 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.216 11:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.475 00:21:47.475 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.475 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.475 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.734 { 00:21:47.734 "cntlid": 103, 00:21:47.734 "qid": 0, 00:21:47.734 "state": "enabled", 00:21:47.734 "thread": "nvmf_tgt_poll_group_000", 00:21:47.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.734 "listen_address": { 00:21:47.734 "trtype": "TCP", 00:21:47.734 "adrfam": "IPv4", 00:21:47.734 "traddr": "10.0.0.2", 00:21:47.734 "trsvcid": "4420" 00:21:47.734 }, 00:21:47.734 "peer_address": { 00:21:47.734 "trtype": "TCP", 00:21:47.734 "adrfam": "IPv4", 00:21:47.734 "traddr": "10.0.0.1", 00:21:47.734 "trsvcid": "38346" 00:21:47.734 }, 00:21:47.734 "auth": { 00:21:47.734 "state": "completed", 00:21:47.734 "digest": "sha512", 00:21:47.734 "dhgroup": "null" 00:21:47.734 } 00:21:47.734 } 00:21:47.734 ]' 00:21:47.734 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.992 11:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.992 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.250 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:48.250 11:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.187 11:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.187 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.445 11:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.704 00:21:49.704 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.704 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.704 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.962 { 00:21:49.962 "cntlid": 105, 00:21:49.962 "qid": 0, 00:21:49.962 "state": "enabled", 00:21:49.962 "thread": "nvmf_tgt_poll_group_000", 00:21:49.962 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.962 "listen_address": { 00:21:49.962 "trtype": "TCP", 00:21:49.962 "adrfam": "IPv4", 00:21:49.962 "traddr": "10.0.0.2", 00:21:49.962 "trsvcid": "4420" 00:21:49.962 }, 00:21:49.962 "peer_address": { 00:21:49.962 "trtype": "TCP", 00:21:49.962 "adrfam": "IPv4", 00:21:49.962 "traddr": "10.0.0.1", 00:21:49.962 "trsvcid": "59350" 00:21:49.962 }, 00:21:49.962 "auth": { 00:21:49.962 "state": "completed", 00:21:49.962 "digest": "sha512", 00:21:49.962 "dhgroup": "ffdhe2048" 00:21:49.962 } 00:21:49.962 } 00:21:49.962 ]' 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.962 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.220 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.220 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.220 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.220 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.220 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.478 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:50.478 11:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.413 11:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.672 11:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.672 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.930 00:21:51.930 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.930 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.930 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.188 { 00:21:52.188 "cntlid": 107, 00:21:52.188 "qid": 0, 00:21:52.188 "state": "enabled", 00:21:52.188 "thread": "nvmf_tgt_poll_group_000", 00:21:52.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.188 "listen_address": { 00:21:52.188 "trtype": "TCP", 00:21:52.188 "adrfam": "IPv4", 00:21:52.188 "traddr": "10.0.0.2", 00:21:52.188 "trsvcid": "4420" 00:21:52.188 }, 00:21:52.188 "peer_address": { 00:21:52.188 "trtype": "TCP", 00:21:52.188 "adrfam": "IPv4", 00:21:52.188 "traddr": "10.0.0.1", 00:21:52.188 "trsvcid": "59378" 00:21:52.188 }, 00:21:52.188 "auth": { 00:21:52.188 "state": 
"completed", 00:21:52.188 "digest": "sha512", 00:21:52.188 "dhgroup": "ffdhe2048" 00:21:52.188 } 00:21:52.188 } 00:21:52.188 ]' 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.188 11:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.756 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:52.756 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:21:53.325 11:16:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.325 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.325 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.325 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.585 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.585 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.585 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.585 11:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.844 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:53.845 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.103 00:21:54.103 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.103 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.103 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.361 
11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.361 { 00:21:54.361 "cntlid": 109, 00:21:54.361 "qid": 0, 00:21:54.361 "state": "enabled", 00:21:54.361 "thread": "nvmf_tgt_poll_group_000", 00:21:54.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.361 "listen_address": { 00:21:54.361 "trtype": "TCP", 00:21:54.361 "adrfam": "IPv4", 00:21:54.361 "traddr": "10.0.0.2", 00:21:54.361 "trsvcid": "4420" 00:21:54.361 }, 00:21:54.361 "peer_address": { 00:21:54.361 "trtype": "TCP", 00:21:54.361 "adrfam": "IPv4", 00:21:54.361 "traddr": "10.0.0.1", 00:21:54.361 "trsvcid": "59412" 00:21:54.361 }, 00:21:54.361 "auth": { 00:21:54.361 "state": "completed", 00:21:54.361 "digest": "sha512", 00:21:54.361 "dhgroup": "ffdhe2048" 00:21:54.361 } 00:21:54.361 } 00:21:54.361 ]' 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.361 11:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.361 11:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.632 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:54.632 11:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.571 
11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:55.571 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.829 11:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.829 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.395 00:21:56.395 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.395 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.395 11:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.653 { 00:21:56.653 "cntlid": 111, 
00:21:56.653 "qid": 0, 00:21:56.653 "state": "enabled", 00:21:56.653 "thread": "nvmf_tgt_poll_group_000", 00:21:56.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.653 "listen_address": { 00:21:56.653 "trtype": "TCP", 00:21:56.653 "adrfam": "IPv4", 00:21:56.653 "traddr": "10.0.0.2", 00:21:56.653 "trsvcid": "4420" 00:21:56.653 }, 00:21:56.653 "peer_address": { 00:21:56.653 "trtype": "TCP", 00:21:56.653 "adrfam": "IPv4", 00:21:56.653 "traddr": "10.0.0.1", 00:21:56.653 "trsvcid": "59446" 00:21:56.653 }, 00:21:56.653 "auth": { 00:21:56.653 "state": "completed", 00:21:56.653 "digest": "sha512", 00:21:56.653 "dhgroup": "ffdhe2048" 00:21:56.653 } 00:21:56.653 } 00:21:56.653 ]' 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.653 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.912 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:56.912 11:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.850 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.109 11:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.109 11:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.674 00:21:58.674 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.674 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.674 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.674 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.933 { 00:21:58.933 "cntlid": 113, 00:21:58.933 "qid": 0, 00:21:58.933 "state": "enabled", 00:21:58.933 "thread": "nvmf_tgt_poll_group_000", 00:21:58.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.933 "listen_address": { 00:21:58.933 "trtype": "TCP", 00:21:58.933 "adrfam": "IPv4", 00:21:58.933 "traddr": "10.0.0.2", 00:21:58.933 "trsvcid": "4420" 00:21:58.933 }, 00:21:58.933 "peer_address": { 00:21:58.933 "trtype": "TCP", 00:21:58.933 "adrfam": "IPv4", 00:21:58.933 "traddr": "10.0.0.1", 00:21:58.933 "trsvcid": "59474" 00:21:58.933 }, 00:21:58.933 "auth": { 00:21:58.933 "state": 
"completed", 00:21:58.933 "digest": "sha512", 00:21:58.933 "dhgroup": "ffdhe3072" 00:21:58.933 } 00:21:58.933 } 00:21:58.933 ]' 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.933 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.191 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:21:59.191 11:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.128 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.386 11:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.386 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.386 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.386 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.386 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.953 00:22:00.953 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.953 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.953 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.211 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.211 { 00:22:01.211 "cntlid": 115, 00:22:01.211 "qid": 0, 00:22:01.211 "state": "enabled", 00:22:01.211 "thread": "nvmf_tgt_poll_group_000", 00:22:01.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.211 "listen_address": { 00:22:01.211 "trtype": "TCP", 00:22:01.211 "adrfam": "IPv4", 00:22:01.211 "traddr": "10.0.0.2", 00:22:01.211 "trsvcid": "4420" 00:22:01.211 }, 00:22:01.211 "peer_address": { 00:22:01.211 "trtype": "TCP", 00:22:01.212 "adrfam": "IPv4", 00:22:01.212 "traddr": "10.0.0.1", 00:22:01.212 "trsvcid": "43526" 00:22:01.212 }, 00:22:01.212 "auth": { 00:22:01.212 "state": "completed", 00:22:01.212 "digest": "sha512", 00:22:01.212 "dhgroup": "ffdhe3072" 00:22:01.212 } 00:22:01.212 } 00:22:01.212 ]' 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.212 11:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.212 11:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.471 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:01.471 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:02.408 11:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.666 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.924 00:22:03.185 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.185 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.185 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.442 11:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.442 { 00:22:03.442 "cntlid": 117, 00:22:03.442 "qid": 0, 00:22:03.442 "state": "enabled", 00:22:03.442 "thread": "nvmf_tgt_poll_group_000", 00:22:03.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.442 "listen_address": { 00:22:03.442 "trtype": "TCP", 00:22:03.442 "adrfam": "IPv4", 00:22:03.442 "traddr": "10.0.0.2", 00:22:03.442 "trsvcid": "4420" 00:22:03.442 }, 00:22:03.442 "peer_address": { 00:22:03.442 "trtype": "TCP", 00:22:03.442 "adrfam": "IPv4", 00:22:03.442 "traddr": "10.0.0.1", 00:22:03.442 "trsvcid": "43562" 00:22:03.442 }, 00:22:03.442 "auth": { 00:22:03.442 "state": "completed", 00:22:03.442 "digest": "sha512", 00:22:03.442 "dhgroup": "ffdhe3072" 00:22:03.442 } 00:22:03.442 } 00:22:03.442 ]' 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.442 11:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.700 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:03.700 11:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:04.637 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.894 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.152 00:22:05.152 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.152 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.152 11:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.411 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.411 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.411 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.412 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.670 { 00:22:05.670 "cntlid": 119, 00:22:05.670 "qid": 0, 00:22:05.670 "state": "enabled", 00:22:05.670 "thread": "nvmf_tgt_poll_group_000", 00:22:05.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:05.670 "listen_address": { 00:22:05.670 "trtype": "TCP", 00:22:05.670 "adrfam": "IPv4", 00:22:05.670 "traddr": "10.0.0.2", 00:22:05.670 "trsvcid": "4420" 00:22:05.670 }, 00:22:05.670 "peer_address": { 00:22:05.670 "trtype": "TCP", 00:22:05.670 "adrfam": "IPv4", 00:22:05.670 "traddr": "10.0.0.1", 
00:22:05.670 "trsvcid": "43592" 00:22:05.670 }, 00:22:05.670 "auth": { 00:22:05.670 "state": "completed", 00:22:05.670 "digest": "sha512", 00:22:05.670 "dhgroup": "ffdhe3072" 00:22:05.670 } 00:22:05.670 } 00:22:05.670 ]' 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.670 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.929 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:05.929 11:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.865 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.123 11:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.123 11:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.382 00:22:07.382 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.382 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.382 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.950 { 00:22:07.950 "cntlid": 121, 00:22:07.950 "qid": 0, 00:22:07.950 "state": "enabled", 00:22:07.950 "thread": "nvmf_tgt_poll_group_000", 00:22:07.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:07.950 "listen_address": { 00:22:07.950 "trtype": "TCP", 00:22:07.950 "adrfam": "IPv4", 00:22:07.950 "traddr": "10.0.0.2", 00:22:07.950 "trsvcid": "4420" 00:22:07.950 }, 00:22:07.950 "peer_address": { 00:22:07.950 "trtype": "TCP", 00:22:07.950 "adrfam": "IPv4", 00:22:07.950 "traddr": "10.0.0.1", 00:22:07.950 "trsvcid": "43624" 00:22:07.950 }, 00:22:07.950 "auth": { 00:22:07.950 "state": "completed", 00:22:07.950 "digest": "sha512", 00:22:07.950 "dhgroup": "ffdhe4096" 00:22:07.950 } 00:22:07.950 } 00:22:07.950 ]' 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.950 11:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.950 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.208 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:08.208 11:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.148 11:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.148 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.405 11:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.405 11:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.663 00:22:09.663 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.663 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.663 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.922 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.182 { 00:22:10.182 "cntlid": 123, 00:22:10.182 "qid": 0, 00:22:10.182 "state": "enabled", 00:22:10.182 "thread": "nvmf_tgt_poll_group_000", 00:22:10.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.182 "listen_address": { 00:22:10.182 "trtype": "TCP", 00:22:10.182 "adrfam": "IPv4", 00:22:10.182 "traddr": "10.0.0.2", 00:22:10.182 "trsvcid": "4420" 00:22:10.182 }, 00:22:10.182 "peer_address": { 00:22:10.182 "trtype": "TCP", 00:22:10.182 "adrfam": "IPv4", 00:22:10.182 "traddr": "10.0.0.1", 00:22:10.182 "trsvcid": "54144" 00:22:10.182 }, 00:22:10.182 "auth": { 00:22:10.182 "state": "completed", 00:22:10.182 "digest": "sha512", 00:22:10.182 "dhgroup": "ffdhe4096" 00:22:10.182 } 00:22:10.182 } 00:22:10.182 ]' 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.182 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:10.183 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.183 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.183 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.183 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.441 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:10.441 11:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.380 11:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:11.380 11:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.638 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.205 00:22:12.205 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.205 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.205 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.463 { 00:22:12.463 "cntlid": 125, 00:22:12.463 "qid": 0, 00:22:12.463 "state": "enabled", 00:22:12.463 "thread": "nvmf_tgt_poll_group_000", 00:22:12.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.463 "listen_address": { 00:22:12.463 "trtype": "TCP", 00:22:12.463 "adrfam": "IPv4", 00:22:12.463 "traddr": "10.0.0.2", 00:22:12.463 
"trsvcid": "4420" 00:22:12.463 }, 00:22:12.463 "peer_address": { 00:22:12.463 "trtype": "TCP", 00:22:12.463 "adrfam": "IPv4", 00:22:12.463 "traddr": "10.0.0.1", 00:22:12.463 "trsvcid": "54184" 00:22:12.463 }, 00:22:12.463 "auth": { 00:22:12.463 "state": "completed", 00:22:12.463 "digest": "sha512", 00:22:12.463 "dhgroup": "ffdhe4096" 00:22:12.463 } 00:22:12.463 } 00:22:12.463 ]' 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:12.463 11:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.463 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.463 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.463 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.722 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:12.722 11:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:13.660 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.661 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.919 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.487 00:22:14.487 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.487 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.487 11:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.487 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.487 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.487 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.487 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.744 { 00:22:14.744 "cntlid": 127, 00:22:14.744 "qid": 0, 00:22:14.744 "state": "enabled", 00:22:14.744 "thread": "nvmf_tgt_poll_group_000", 00:22:14.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:14.744 "listen_address": { 00:22:14.744 "trtype": "TCP", 00:22:14.744 "adrfam": "IPv4", 00:22:14.744 "traddr": "10.0.0.2", 00:22:14.744 "trsvcid": "4420" 00:22:14.744 }, 00:22:14.744 "peer_address": { 00:22:14.744 "trtype": "TCP", 00:22:14.744 "adrfam": "IPv4", 00:22:14.744 "traddr": "10.0.0.1", 00:22:14.744 "trsvcid": "54216" 00:22:14.744 }, 00:22:14.744 "auth": { 00:22:14.744 "state": "completed", 00:22:14.744 "digest": "sha512", 00:22:14.744 "dhgroup": "ffdhe4096" 00:22:14.744 } 00:22:14.744 } 00:22:14.744 ]' 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.744 11:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.744 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.001 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:15.001 11:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.937 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.195 11:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.761 00:22:16.761 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.761 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.761 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.020 11:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.020 { 00:22:17.020 "cntlid": 129, 00:22:17.020 "qid": 0, 00:22:17.020 "state": "enabled", 00:22:17.020 "thread": "nvmf_tgt_poll_group_000", 00:22:17.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.020 "listen_address": { 00:22:17.020 "trtype": "TCP", 00:22:17.020 "adrfam": "IPv4", 00:22:17.020 "traddr": "10.0.0.2", 00:22:17.020 "trsvcid": "4420" 00:22:17.020 }, 00:22:17.020 "peer_address": { 00:22:17.020 "trtype": "TCP", 00:22:17.020 "adrfam": "IPv4", 00:22:17.020 "traddr": "10.0.0.1", 00:22:17.020 "trsvcid": "54228" 00:22:17.020 }, 00:22:17.020 "auth": { 00:22:17.020 "state": "completed", 00:22:17.020 "digest": "sha512", 00:22:17.020 "dhgroup": "ffdhe6144" 00:22:17.020 } 00:22:17.020 } 00:22:17.020 ]' 00:22:17.020 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.278 11:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.536 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:17.537 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.472 11:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:18.472 11:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.730 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.297 00:22:19.297 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.297 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.297 11:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.555 { 00:22:19.555 "cntlid": 131, 00:22:19.555 "qid": 0, 00:22:19.555 "state": "enabled", 00:22:19.555 "thread": "nvmf_tgt_poll_group_000", 00:22:19.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.555 "listen_address": { 00:22:19.555 "trtype": "TCP", 00:22:19.555 "adrfam": "IPv4", 00:22:19.555 "traddr": "10.0.0.2", 00:22:19.555 
"trsvcid": "4420" 00:22:19.555 }, 00:22:19.555 "peer_address": { 00:22:19.555 "trtype": "TCP", 00:22:19.555 "adrfam": "IPv4", 00:22:19.555 "traddr": "10.0.0.1", 00:22:19.555 "trsvcid": "49786" 00:22:19.555 }, 00:22:19.555 "auth": { 00:22:19.555 "state": "completed", 00:22:19.555 "digest": "sha512", 00:22:19.555 "dhgroup": "ffdhe6144" 00:22:19.555 } 00:22:19.555 } 00:22:19.555 ]' 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.555 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.813 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:19.813 11:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.747 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.313 11:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.880 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.880 { 00:22:21.880 "cntlid": 133, 00:22:21.880 "qid": 0, 00:22:21.880 "state": "enabled", 00:22:21.880 "thread": "nvmf_tgt_poll_group_000", 00:22:21.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.880 "listen_address": { 00:22:21.880 "trtype": "TCP", 00:22:21.880 "adrfam": "IPv4", 00:22:21.880 "traddr": "10.0.0.2", 00:22:21.880 "trsvcid": "4420" 00:22:21.880 }, 00:22:21.880 "peer_address": { 00:22:21.880 "trtype": "TCP", 00:22:21.880 "adrfam": "IPv4", 00:22:21.880 "traddr": "10.0.0.1", 00:22:21.880 "trsvcid": "49800" 00:22:21.880 }, 00:22:21.880 "auth": { 00:22:21.880 "state": "completed", 00:22:21.880 "digest": "sha512", 00:22:21.880 "dhgroup": "ffdhe6144" 00:22:21.880 } 00:22:21.880 } 00:22:21.880 ]' 00:22:21.880 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.138 11:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.138 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.396 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:22.396 11:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.330 11:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.587 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.157 00:22:24.157 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.157 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.157 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.415 { 00:22:24.415 "cntlid": 135, 00:22:24.415 "qid": 0, 00:22:24.415 "state": "enabled", 00:22:24.415 "thread": "nvmf_tgt_poll_group_000", 00:22:24.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.415 "listen_address": { 00:22:24.415 "trtype": "TCP", 00:22:24.415 "adrfam": "IPv4", 00:22:24.415 "traddr": "10.0.0.2", 00:22:24.415 "trsvcid": "4420" 00:22:24.415 }, 00:22:24.415 "peer_address": { 00:22:24.415 "trtype": "TCP", 00:22:24.415 "adrfam": "IPv4", 00:22:24.415 "traddr": "10.0.0.1", 00:22:24.415 "trsvcid": "49832" 00:22:24.415 }, 00:22:24.415 "auth": { 00:22:24.415 "state": "completed", 00:22:24.415 "digest": "sha512", 00:22:24.415 "dhgroup": "ffdhe6144" 00:22:24.415 } 00:22:24.415 } 00:22:24.415 ]' 00:22:24.415 11:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.415 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.415 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.415 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:24.415 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.673 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.673 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.673 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.933 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:24.933 11:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.870 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.870 11:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.129 11:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.093 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.093 { 00:22:27.093 "cntlid": 137, 00:22:27.093 "qid": 0, 00:22:27.093 "state": "enabled", 00:22:27.093 "thread": "nvmf_tgt_poll_group_000", 00:22:27.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:27.093 "listen_address": { 00:22:27.093 "trtype": "TCP", 00:22:27.093 "adrfam": "IPv4", 00:22:27.093 "traddr": "10.0.0.2", 00:22:27.093 
"trsvcid": "4420" 00:22:27.093 }, 00:22:27.093 "peer_address": { 00:22:27.093 "trtype": "TCP", 00:22:27.093 "adrfam": "IPv4", 00:22:27.093 "traddr": "10.0.0.1", 00:22:27.093 "trsvcid": "49862" 00:22:27.093 }, 00:22:27.093 "auth": { 00:22:27.093 "state": "completed", 00:22:27.093 "digest": "sha512", 00:22:27.093 "dhgroup": "ffdhe8192" 00:22:27.093 } 00:22:27.093 } 00:22:27.093 ]' 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.093 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.405 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.406 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.406 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.406 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.406 11:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.674 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:27.674 11:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.679 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.952 11:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.952 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.953 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.953 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.953 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.953 11:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.927 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.927 { 00:22:29.927 "cntlid": 139, 00:22:29.927 "qid": 0, 00:22:29.927 "state": "enabled", 00:22:29.927 "thread": "nvmf_tgt_poll_group_000", 00:22:29.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.927 "listen_address": { 00:22:29.927 "trtype": "TCP", 00:22:29.927 "adrfam": "IPv4", 00:22:29.927 "traddr": "10.0.0.2", 00:22:29.927 "trsvcid": "4420" 00:22:29.927 }, 00:22:29.927 "peer_address": { 00:22:29.927 "trtype": "TCP", 00:22:29.927 "adrfam": "IPv4", 00:22:29.927 "traddr": "10.0.0.1", 00:22:29.927 "trsvcid": "38972" 00:22:29.927 }, 00:22:29.927 "auth": { 00:22:29.927 "state": "completed", 00:22:29.927 "digest": "sha512", 00:22:29.927 "dhgroup": "ffdhe8192" 00:22:29.927 } 00:22:29.927 } 00:22:29.927 ]' 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.927 11:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.927 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.185 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.185 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.185 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.185 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.185 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.443 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:30.443 11:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: --dhchap-ctrl-secret DHHC-1:02:Y2ZmODI0OGEyMDgxMzc5OGE3ZGUzOWZkNGEzMDkyZmZiNjIyMzhmMWE2YWJkNjVkHubQVg==: 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.375 11:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.632 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.570 00:22:32.570 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.570 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.570 11:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.570 11:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.570 { 00:22:32.570 "cntlid": 141, 00:22:32.570 "qid": 0, 00:22:32.570 "state": "enabled", 00:22:32.570 "thread": "nvmf_tgt_poll_group_000", 00:22:32.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.570 "listen_address": { 00:22:32.570 "trtype": "TCP", 00:22:32.570 "adrfam": "IPv4", 00:22:32.570 "traddr": "10.0.0.2", 00:22:32.570 "trsvcid": "4420" 00:22:32.570 }, 00:22:32.570 "peer_address": { 00:22:32.570 "trtype": "TCP", 00:22:32.570 "adrfam": "IPv4", 00:22:32.570 "traddr": "10.0.0.1", 00:22:32.570 "trsvcid": "39004" 00:22:32.570 }, 00:22:32.570 "auth": { 00:22:32.570 "state": "completed", 00:22:32.570 "digest": "sha512", 00:22:32.570 "dhgroup": "ffdhe8192" 00:22:32.570 } 00:22:32.570 } 00:22:32.570 ]' 00:22:32.570 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.828 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.086 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:33.086 11:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:01:YTRmNjdjMWE1MTBkZWY2ZTk5NTA5ZDEwNzRhN2IyM2YAElMe: 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.020 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:34.278 11:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:35.225 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.225 { 00:22:35.225 "cntlid": 143, 00:22:35.225 "qid": 0, 00:22:35.225 "state": "enabled", 00:22:35.225 "thread": "nvmf_tgt_poll_group_000", 00:22:35.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.225 "listen_address": { 00:22:35.225 "trtype": "TCP", 00:22:35.225 "adrfam": 
"IPv4", 00:22:35.225 "traddr": "10.0.0.2", 00:22:35.225 "trsvcid": "4420" 00:22:35.225 }, 00:22:35.225 "peer_address": { 00:22:35.225 "trtype": "TCP", 00:22:35.225 "adrfam": "IPv4", 00:22:35.225 "traddr": "10.0.0.1", 00:22:35.225 "trsvcid": "39026" 00:22:35.225 }, 00:22:35.225 "auth": { 00:22:35.225 "state": "completed", 00:22:35.225 "digest": "sha512", 00:22:35.225 "dhgroup": "ffdhe8192" 00:22:35.225 } 00:22:35.225 } 00:22:35.225 ]' 00:22:35.225 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.484 11:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.742 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:35.742 11:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.676 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:36.934 11:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:36.934 11:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.868 00:22:37.868 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.868 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.868 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.126 { 00:22:38.126 "cntlid": 145, 00:22:38.126 "qid": 0, 00:22:38.126 "state": "enabled", 00:22:38.126 "thread": "nvmf_tgt_poll_group_000", 00:22:38.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.126 "listen_address": { 00:22:38.126 "trtype": "TCP", 00:22:38.126 "adrfam": "IPv4", 00:22:38.126 "traddr": "10.0.0.2", 00:22:38.126 "trsvcid": "4420" 00:22:38.126 }, 00:22:38.126 "peer_address": { 00:22:38.126 "trtype": "TCP", 00:22:38.126 "adrfam": "IPv4", 00:22:38.126 "traddr": "10.0.0.1", 00:22:38.126 "trsvcid": "39048" 00:22:38.126 }, 00:22:38.126 "auth": { 00:22:38.126 "state": 
"completed", 00:22:38.126 "digest": "sha512", 00:22:38.126 "dhgroup": "ffdhe8192" 00:22:38.126 } 00:22:38.126 } 00:22:38.126 ]' 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.126 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.384 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:38.384 11:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:N2M4NzhkNjFjNjJjMDZkZmU2NjIzNjBjMWJkYmJlMzEwMzhkZDJmZGI2YzE2NDBlMM99qg==: --dhchap-ctrl-secret 
DHHC-1:03:ZjdlZTI3ZWU3NWU5OTBjNGZlNDEzZDZmOTE1MzNkYTZhZWVlMDZkYzM1Y2EyZmFmNTI4ZDdlMDhhMjFmNGQ4OO+eA3I=: 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:39.318 11:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:40.252 request: 00:22:40.252 { 00:22:40.252 "name": "nvme0", 00:22:40.252 "trtype": "tcp", 00:22:40.252 "traddr": "10.0.0.2", 00:22:40.252 "adrfam": "ipv4", 00:22:40.252 "trsvcid": "4420", 00:22:40.252 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:40.252 "prchk_reftag": false, 00:22:40.252 "prchk_guard": false, 00:22:40.252 "hdgst": false, 00:22:40.252 "ddgst": false, 00:22:40.252 "dhchap_key": "key2", 00:22:40.252 "allow_unrecognized_csi": false, 00:22:40.252 "method": "bdev_nvme_attach_controller", 00:22:40.252 "req_id": 1 00:22:40.252 } 00:22:40.252 Got JSON-RPC error response 00:22:40.252 response: 00:22:40.252 { 00:22:40.252 "code": -5, 00:22:40.252 "message": 
"Input/output error" 00:22:40.252 } 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.252 11:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:40.252 11:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:41.186 request: 00:22:41.186 { 00:22:41.186 "name": "nvme0", 00:22:41.186 "trtype": "tcp", 00:22:41.186 "traddr": "10.0.0.2", 00:22:41.186 "adrfam": "ipv4", 00:22:41.186 "trsvcid": "4420", 00:22:41.186 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.186 "prchk_reftag": false, 00:22:41.186 "prchk_guard": false, 00:22:41.186 "hdgst": 
false, 00:22:41.186 "ddgst": false, 00:22:41.186 "dhchap_key": "key1", 00:22:41.186 "dhchap_ctrlr_key": "ckey2", 00:22:41.186 "allow_unrecognized_csi": false, 00:22:41.186 "method": "bdev_nvme_attach_controller", 00:22:41.186 "req_id": 1 00:22:41.186 } 00:22:41.186 Got JSON-RPC error response 00:22:41.186 response: 00:22:41.186 { 00:22:41.186 "code": -5, 00:22:41.186 "message": "Input/output error" 00:22:41.186 } 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.186 11:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:41.752 request: 00:22:41.752 { 00:22:41.752 "name": "nvme0", 00:22:41.752 "trtype": 
"tcp", 00:22:41.752 "traddr": "10.0.0.2", 00:22:41.752 "adrfam": "ipv4", 00:22:41.752 "trsvcid": "4420", 00:22:41.752 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.752 "prchk_reftag": false, 00:22:41.752 "prchk_guard": false, 00:22:41.752 "hdgst": false, 00:22:41.752 "ddgst": false, 00:22:41.752 "dhchap_key": "key1", 00:22:41.752 "dhchap_ctrlr_key": "ckey1", 00:22:41.752 "allow_unrecognized_csi": false, 00:22:41.752 "method": "bdev_nvme_attach_controller", 00:22:41.752 "req_id": 1 00:22:41.752 } 00:22:41.752 Got JSON-RPC error response 00:22:41.752 response: 00:22:41.752 { 00:22:41.752 "code": -5, 00:22:41.752 "message": "Input/output error" 00:22:41.752 } 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 239915 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 239915 ']' 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 239915 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239915 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239915' 00:22:41.752 killing process with pid 239915 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 239915 00:22:41.752 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 239915 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=262932 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 262932 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 262932 ']' 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.010 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 262932 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 262932 ']' 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:42.269 11:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.527 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.527 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:42.527 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:42.527 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.527 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 null0 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eMO 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Vxl ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Vxl 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AXX 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IP6 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IP6 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zNe 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.sEP ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sEP 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5Bl 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.786 11:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.159 nvme0n1 00:22:44.159 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.159 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.159 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.416 { 00:22:44.416 "cntlid": 1, 00:22:44.416 "qid": 0, 00:22:44.416 "state": "enabled", 00:22:44.416 "thread": "nvmf_tgt_poll_group_000", 00:22:44.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.416 "listen_address": { 00:22:44.416 "trtype": "TCP", 00:22:44.416 "adrfam": "IPv4", 00:22:44.416 "traddr": "10.0.0.2", 00:22:44.416 "trsvcid": "4420" 00:22:44.416 }, 00:22:44.416 "peer_address": { 00:22:44.416 "trtype": "TCP", 00:22:44.416 "adrfam": "IPv4", 00:22:44.416 "traddr": 
"10.0.0.1", 00:22:44.416 "trsvcid": "58226" 00:22:44.416 }, 00:22:44.416 "auth": { 00:22:44.416 "state": "completed", 00:22:44.416 "digest": "sha512", 00:22:44.416 "dhgroup": "ffdhe8192" 00:22:44.416 } 00:22:44.416 } 00:22:44.416 ]' 00:22:44.416 11:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.416 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.416 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.416 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:44.416 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.673 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.673 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.673 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.930 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:44.930 11:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:45.863 11:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:45.863 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:46.121 11:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.121 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.380 request: 00:22:46.380 { 00:22:46.380 "name": "nvme0", 00:22:46.380 "trtype": "tcp", 00:22:46.380 "traddr": "10.0.0.2", 00:22:46.380 "adrfam": "ipv4", 00:22:46.380 "trsvcid": "4420", 00:22:46.380 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.380 "prchk_reftag": false, 00:22:46.380 "prchk_guard": false, 00:22:46.380 "hdgst": false, 00:22:46.380 "ddgst": false, 00:22:46.380 "dhchap_key": "key3", 00:22:46.380 
"allow_unrecognized_csi": false, 00:22:46.380 "method": "bdev_nvme_attach_controller", 00:22:46.380 "req_id": 1 00:22:46.380 } 00:22:46.380 Got JSON-RPC error response 00:22:46.380 response: 00:22:46.380 { 00:22:46.380 "code": -5, 00:22:46.380 "message": "Input/output error" 00:22:46.380 } 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:46.380 11:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:46.638 11:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.638 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.896 request: 00:22:46.896 { 00:22:46.896 "name": "nvme0", 00:22:46.896 "trtype": "tcp", 00:22:46.896 "traddr": "10.0.0.2", 00:22:46.896 "adrfam": "ipv4", 00:22:46.896 "trsvcid": "4420", 00:22:46.896 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.896 "prchk_reftag": false, 00:22:46.896 "prchk_guard": false, 00:22:46.896 "hdgst": false, 00:22:46.896 "ddgst": false, 00:22:46.896 "dhchap_key": "key3", 00:22:46.896 "allow_unrecognized_csi": false, 00:22:46.896 "method": "bdev_nvme_attach_controller", 00:22:46.896 "req_id": 1 00:22:46.896 } 00:22:46.896 Got JSON-RPC error response 00:22:46.896 response: 00:22:46.896 { 00:22:46.896 "code": -5, 00:22:46.896 "message": "Input/output error" 00:22:46.896 } 00:22:46.896 
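The JSON-RPC failure dumped above (`"code": -5`, `Input/output error`) is the outcome the `NOT`-wrapped `bdev_connect` expects: the host offers `key3` while the target-side auth configuration rejects it, so `bdev_nvme_attach_controller` fails and the harness asserts `es=1`. A minimal sketch of the request/response shapes seen in this log, for readers who want to script against the dump (the `build_attach_request` helper is hypothetical and only mirrors fields copied from the trace; it does not talk to a live SPDK target, and the dumped object is SPDK's error-report format, not a full JSON-RPC 2.0 envelope):

```python
import json

def build_attach_request(req_id, name, dhchap_key):
    # Hypothetical helper: reassembles the fields that appear in the
    # log's "request:" dump for bdev_nvme_attach_controller.
    return {
        "method": "bdev_nvme_attach_controller",
        "req_id": req_id,
        "params": {
            "name": name,
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2024-03.io.spdk:cnode0",
            "dhchap_key": dhchap_key,
        },
    }

# The error body captured in the log: -5 maps to an I/O error, which
# the test treats as the expected authentication failure.
response = json.loads('{"code": -5, "message": "Input/output error"}')
```

A response with `code` -5 is what drives the harness down its `es=1` branch; a successful attach would instead return the controller name.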
11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.896 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.154 11:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:47.721 request: 00:22:47.721 { 00:22:47.721 "name": "nvme0", 00:22:47.721 "trtype": "tcp", 00:22:47.721 "traddr": "10.0.0.2", 00:22:47.721 "adrfam": "ipv4", 00:22:47.721 "trsvcid": "4420", 00:22:47.721 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:47.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:47.721 "prchk_reftag": false, 00:22:47.721 "prchk_guard": false, 00:22:47.721 "hdgst": false, 00:22:47.721 "ddgst": false, 00:22:47.721 "dhchap_key": "key0", 00:22:47.721 "dhchap_ctrlr_key": "key1", 00:22:47.721 "allow_unrecognized_csi": false, 00:22:47.721 "method": "bdev_nvme_attach_controller", 00:22:47.721 "req_id": 1 00:22:47.721 } 00:22:47.721 Got JSON-RPC error response 00:22:47.721 response: 00:22:47.721 { 00:22:47.721 "code": -5, 00:22:47.721 "message": "Input/output error" 00:22:47.721 } 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:47.721 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:47.979 nvme0n1 00:22:47.979 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:47.979 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.979 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:48.236 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.493 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.493 11:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.750 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:48.750 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.750 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:48.750 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.750 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:48.751 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:48.751 11:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.122 nvme0n1 00:22:50.122 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:50.122 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.122 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.380 
11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:50.380 11:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.638 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.638 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:50.638 11:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: --dhchap-ctrl-secret DHHC-1:03:OGQ2ZmI4ZWIwYzQyZjllOTY2ZjI4MDIyNjRjNTE5NTZmOGE5MmFkM2JiMTE4YWU5YWIzYTZlZWRhOGNlNzI1NJKt3aQ=: 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.572 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:51.830 11:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:52.763 request: 00:22:52.763 { 00:22:52.763 "name": "nvme0", 00:22:52.763 "trtype": "tcp", 00:22:52.763 "traddr": "10.0.0.2", 00:22:52.763 "adrfam": "ipv4", 00:22:52.763 "trsvcid": "4420", 00:22:52.763 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:52.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:52.763 "prchk_reftag": false, 00:22:52.763 "prchk_guard": false, 00:22:52.763 "hdgst": false, 00:22:52.763 "ddgst": false, 00:22:52.763 "dhchap_key": "key1", 00:22:52.763 "allow_unrecognized_csi": false, 00:22:52.763 "method": "bdev_nvme_attach_controller", 00:22:52.763 "req_id": 1 00:22:52.763 } 00:22:52.763 Got JSON-RPC error response 00:22:52.763 response: 00:22:52.763 { 00:22:52.763 "code": -5, 00:22:52.763 "message": "Input/output error" 00:22:52.763 } 00:22:52.763 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.764 11:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.137 nvme0n1 00:22:54.137 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:54.137 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:54.137 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.395 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.395 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.395 11:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:54.666 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:54.925 nvme0n1 00:22:54.925 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:54.925 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.925 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:55.183 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.183 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.183 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.441 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:55.441 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.441 11:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: '' 2s 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: ]] 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MmE3MTM3MmQyOWMwNmFlNTgwMGVkZjAyY2FjZGQ2MTQr1bRM: 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:55.441 11:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:57.968 
11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: 2s 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:57.968 11:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: ]] 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODczYWMzN2E1NzAzMWUwMTRmNmM5YWFkNjc0YTdjZWYyZThlMDc4NmI3NWM1MjY0pRaW6g==: 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:57.968 11:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.867 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.868 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.868 11:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:01.242 nvme0n1 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.242 11:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.806 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:01.806 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:01.806 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:02.064 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:02.322 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:02.322 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:02.322 11:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.580 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:03.514 request: 00:23:03.514 { 00:23:03.514 "name": "nvme0", 00:23:03.514 "dhchap_key": "key1", 00:23:03.514 "dhchap_ctrlr_key": "key3", 00:23:03.514 "method": "bdev_nvme_set_keys", 00:23:03.514 "req_id": 1 00:23:03.514 } 00:23:03.514 Got JSON-RPC error response 00:23:03.514 response: 00:23:03.514 { 00:23:03.514 "code": -13, 00:23:03.514 "message": "Permission denied" 00:23:03.514 } 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.514 11:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:03.772 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:03.772 11:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:04.705 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:04.705 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:04.706 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:04.971 11:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:06.344 nvme0n1 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.344 11:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:07.278 request: 00:23:07.278 { 00:23:07.278 "name": "nvme0", 00:23:07.278 "dhchap_key": "key2", 00:23:07.278 "dhchap_ctrlr_key": "key0", 00:23:07.278 "method": "bdev_nvme_set_keys", 00:23:07.278 "req_id": 1 00:23:07.278 } 00:23:07.278 Got JSON-RPC error response 00:23:07.278 response: 00:23:07.278 { 00:23:07.278 "code": -13, 00:23:07.278 "message": "Permission denied" 00:23:07.278 } 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.278 11:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:07.536 
11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:07.536 11:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:08.469 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:08.469 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:08.469 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 239944 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 239944 ']' 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 239944 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.727 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 239944 00:23:08.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:08.984 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:08.984 11:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 239944' 00:23:08.984 killing process with pid 239944 00:23:08.985 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 239944 00:23:08.985 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 239944 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.242 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.242 rmmod nvme_tcp 00:23:09.242 rmmod nvme_fabrics 00:23:09.243 rmmod nvme_keyring 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 262932 ']' 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 262932 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 262932 ']' 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # kill -0 262932 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 262932 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 262932' 00:23:09.243 killing process with pid 262932 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 262932 00:23:09.243 11:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 262932 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.501 11:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.eMO /tmp/spdk.key-sha256.AXX /tmp/spdk.key-sha384.zNe /tmp/spdk.key-sha512.5Bl /tmp/spdk.key-sha512.Vxl /tmp/spdk.key-sha384.IP6 /tmp/spdk.key-sha256.sEP '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:12.033 00:23:12.033 real 3m33.156s 00:23:12.033 user 8m18.323s 00:23:12.033 sys 0m28.095s 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.033 ************************************ 00:23:12.033 END TEST nvmf_auth_target 00:23:12.033 ************************************ 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:12.033 11:17:36 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:12.033 ************************************ 00:23:12.033 START TEST nvmf_bdevio_no_huge 00:23:12.033 ************************************ 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.033 * Looking for test storage... 00:23:12.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.033 11:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:12.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.033 --rc genhtml_branch_coverage=1 00:23:12.033 --rc genhtml_function_coverage=1 00:23:12.033 --rc genhtml_legend=1 00:23:12.033 --rc geninfo_all_blocks=1 00:23:12.033 --rc geninfo_unexecuted_blocks=1 00:23:12.033 00:23:12.033 ' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:12.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.033 --rc genhtml_branch_coverage=1 00:23:12.033 --rc genhtml_function_coverage=1 00:23:12.033 --rc genhtml_legend=1 00:23:12.033 --rc geninfo_all_blocks=1 00:23:12.033 --rc geninfo_unexecuted_blocks=1 00:23:12.033 00:23:12.033 ' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:12.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.033 --rc genhtml_branch_coverage=1 00:23:12.033 --rc genhtml_function_coverage=1 00:23:12.033 --rc genhtml_legend=1 00:23:12.033 --rc geninfo_all_blocks=1 00:23:12.033 --rc geninfo_unexecuted_blocks=1 00:23:12.033 00:23:12.033 ' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:12.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.033 --rc genhtml_branch_coverage=1 00:23:12.033 --rc genhtml_function_coverage=1 00:23:12.033 --rc genhtml_legend=1 00:23:12.033 --rc geninfo_all_blocks=1 00:23:12.033 --rc geninfo_unexecuted_blocks=1 00:23:12.033 00:23:12.033 ' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.033 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.034 11:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.034 11:17:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:23:13.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:13.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.945 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:13.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.946 
11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:13.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
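The "Found net devices under 0000:0a:00.x" messages above come from globbing each PCI device's sysfs `net/` directory and stripping the path prefix with bash suffix removal (`${var##*/}`). A standalone illustration of that extraction step; the sysfs paths below are hard-coded samples mirroring this log's devices, not a live scan:

```shell
#!/usr/bin/env bash
# Mimic the nvmf/common.sh pattern: glob interface entries under a PCI
# device, then strip everything up to the last '/' so only names remain.
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:0a:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface name
echo "${pci_net_devs[@]}"
```

In the real script the array is populated by `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`, so the same stripping yields the `cvl_0_0`/`cvl_0_1` names appended to `net_devs`.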
00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:13.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:23:13.946 00:23:13.946 --- 10.0.0.2 ping statistics --- 00:23:13.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.946 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:13.946 00:23:13.946 --- 10.0.0.1 ping statistics --- 00:23:13.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.946 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=268189 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 268189 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 268189 ']' 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.946 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:13.946 [2024-11-17 11:17:38.536002] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:13.946 [2024-11-17 11:17:38.536078] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:14.205 [2024-11-17 11:17:38.611666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.205 [2024-11-17 11:17:38.657188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.205 [2024-11-17 11:17:38.657248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.205 [2024-11-17 11:17:38.657276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.205 [2024-11-17 11:17:38.657287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.205 [2024-11-17 11:17:38.657301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.205 [2024-11-17 11:17:38.658394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.205 [2024-11-17 11:17:38.658455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:14.205 [2024-11-17 11:17:38.658512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:14.205 [2024-11-17 11:17:38.658514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 [2024-11-17 11:17:38.813473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.205 11:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 Malloc0 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.205 [2024-11-17 11:17:38.852151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.205 11:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.205 { 00:23:14.205 "params": { 00:23:14.205 "name": "Nvme$subsystem", 00:23:14.205 "trtype": "$TEST_TRANSPORT", 00:23:14.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.205 "adrfam": "ipv4", 00:23:14.205 "trsvcid": "$NVMF_PORT", 00:23:14.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.205 "hdgst": ${hdgst:-false}, 00:23:14.205 "ddgst": ${ddgst:-false} 00:23:14.205 }, 00:23:14.205 "method": "bdev_nvme_attach_controller" 00:23:14.205 } 00:23:14.205 EOF 00:23:14.205 )") 00:23:14.205 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:14.464 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
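The JSON that `gen_nvmf_target_json` emits above is built from a heredoc in which plain `$var` references and `${var:-default}` expansions are substituted per subsystem, then joined with `jq`. A simplified standalone sketch of that heredoc pattern for a single subsystem (values copied from this log; `jq` joining omitted):

```shell
#!/usr/bin/env bash
# Build one bdev_nvme_attach_controller config fragment the way the
# gen_nvmf_target_json heredoc does: shell expansion fills in the fields,
# and ${hdgst:-false}/${ddgst:-false} default the digests when unset.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
subsystem=1
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

With `hdgst` and `ddgst` unset, the defaults produce the `"hdgst": false, "ddgst": false` seen in the printed config above.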
00:23:14.464 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:14.464 11:17:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:14.464 "params": { 00:23:14.464 "name": "Nvme1", 00:23:14.464 "trtype": "tcp", 00:23:14.464 "traddr": "10.0.0.2", 00:23:14.464 "adrfam": "ipv4", 00:23:14.464 "trsvcid": "4420", 00:23:14.464 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.464 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.464 "hdgst": false, 00:23:14.464 "ddgst": false 00:23:14.464 }, 00:23:14.464 "method": "bdev_nvme_attach_controller" 00:23:14.464 }' 00:23:14.464 [2024-11-17 11:17:38.901603] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:14.464 [2024-11-17 11:17:38.901674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid268214 ] 00:23:14.464 [2024-11-17 11:17:38.969497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.464 [2024-11-17 11:17:39.019576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.464 [2024-11-17 11:17:39.019603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.464 [2024-11-17 11:17:39.019607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.722 I/O targets: 00:23:14.722 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:14.722 00:23:14.722 00:23:14.722 CUnit - A unit testing framework for C - Version 2.1-3 00:23:14.722 http://cunit.sourceforge.net/ 00:23:14.722 00:23:14.722 00:23:14.722 Suite: bdevio tests on: Nvme1n1 00:23:14.722 Test: blockdev write read block ...passed 00:23:14.979 Test: blockdev write zeroes read block ...passed 00:23:14.979 Test: blockdev write zeroes read no split ...passed 00:23:14.979 Test: blockdev write zeroes 
read split ...passed 00:23:14.979 Test: blockdev write zeroes read split partial ...passed 00:23:14.979 Test: blockdev reset ...[2024-11-17 11:17:39.492654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:14.979 [2024-11-17 11:17:39.492779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13066a0 (9): Bad file descriptor 00:23:15.236 [2024-11-17 11:17:39.636509] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:15.236 passed 00:23:15.236 Test: blockdev write read 8 blocks ...passed 00:23:15.236 Test: blockdev write read size > 128k ...passed 00:23:15.236 Test: blockdev write read invalid size ...passed 00:23:15.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:15.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:15.236 Test: blockdev write read max offset ...passed 00:23:15.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:15.237 Test: blockdev writev readv 8 blocks ...passed 00:23:15.237 Test: blockdev writev readv 30 x 1block ...passed 00:23:15.237 Test: blockdev writev readv block ...passed 00:23:15.237 Test: blockdev writev readv size > 128k ...passed 00:23:15.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:15.237 Test: blockdev comparev and writev ...[2024-11-17 11:17:39.892510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.237 [2024-11-17 11:17:39.892554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.237 [2024-11-17 11:17:39.892587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.237 [2024-11-17 
11:17:39.892606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.892957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.892982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.893003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.893020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.893355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.893379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.893400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.893416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.893724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.893749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.893770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.495 [2024-11-17 11:17:39.893786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.495 passed 00:23:15.495 Test: blockdev nvme passthru rw ...passed 00:23:15.495 Test: blockdev nvme passthru vendor specific ...[2024-11-17 11:17:39.977768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.495 [2024-11-17 11:17:39.977794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.977931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.495 [2024-11-17 11:17:39.977953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.978089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.495 [2024-11-17 11:17:39.978111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.495 [2024-11-17 11:17:39.978247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.495 [2024-11-17 11:17:39.978270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.495 passed 00:23:15.495 Test: blockdev nvme admin passthru ...passed 00:23:15.495 Test: blockdev copy ...passed 00:23:15.495 00:23:15.495 Run Summary: Type Total Ran Passed Failed Inactive 00:23:15.495 suites 1 1 n/a 0 0 00:23:15.495 tests 23 23 23 0 0 00:23:15.495 asserts 152 152 152 0 n/a 00:23:15.495 00:23:15.495 Elapsed time = 1.418 seconds 
00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.754 rmmod nvme_tcp 00:23:15.754 rmmod nvme_fabrics 00:23:15.754 rmmod nvme_keyring 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 268189 ']' 00:23:15.754 11:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 268189 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 268189 ']' 00:23:15.754 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 268189 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268189 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268189' 00:23:16.012 killing process with pid 268189 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 268189 00:23:16.012 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 268189 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:16.271 11:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.271 11:17:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.802 00:23:18.802 real 0m6.682s 00:23:18.802 user 0m11.742s 00:23:18.802 sys 0m2.513s 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.802 ************************************ 00:23:18.802 END TEST nvmf_bdevio_no_huge 00:23:18.802 ************************************ 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:18.802 
************************************ 00:23:18.802 START TEST nvmf_tls 00:23:18.802 ************************************ 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.802 * Looking for test storage... 00:23:18.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.802 11:17:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.802 --rc genhtml_branch_coverage=1 00:23:18.802 --rc genhtml_function_coverage=1 00:23:18.802 --rc genhtml_legend=1 00:23:18.802 --rc geninfo_all_blocks=1 00:23:18.802 --rc geninfo_unexecuted_blocks=1 00:23:18.802 00:23:18.802 ' 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.802 --rc genhtml_branch_coverage=1 00:23:18.802 --rc genhtml_function_coverage=1 00:23:18.802 --rc genhtml_legend=1 00:23:18.802 --rc geninfo_all_blocks=1 00:23:18.802 --rc geninfo_unexecuted_blocks=1 00:23:18.802 00:23:18.802 ' 00:23:18.802 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.802 --rc genhtml_branch_coverage=1 00:23:18.802 --rc genhtml_function_coverage=1 00:23:18.802 --rc genhtml_legend=1 00:23:18.802 --rc geninfo_all_blocks=1 00:23:18.803 --rc geninfo_unexecuted_blocks=1 00:23:18.803 00:23:18.803 ' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.803 --rc genhtml_branch_coverage=1 00:23:18.803 --rc genhtml_function_coverage=1 00:23:18.803 --rc genhtml_legend=1 00:23:18.803 --rc geninfo_all_blocks=1 00:23:18.803 --rc geninfo_unexecuted_blocks=1 00:23:18.803 00:23:18.803 ' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.803 
11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:18.803 11:17:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.705 11:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:20.705 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:20.705 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.705 11:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:20.705 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:20.705 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:20.705 11:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.705 
11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.705 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.706 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.706 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.706 11:17:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:23:20.706 00:23:20.706 --- 10.0.0.2 ping statistics --- 00:23:20.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.706 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:23:20.706 00:23:20.706 --- 10.0.0.1 ping statistics --- 00:23:20.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.706 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=270407 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 270407 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 270407 ']' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 [2024-11-17 11:17:45.096439] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:20.706 [2024-11-17 11:17:45.096551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.706 [2024-11-17 11:17:45.175264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.706 [2024-11-17 11:17:45.225402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.706 [2024-11-17 11:17:45.225467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:20.706 [2024-11-17 11:17:45.225492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.706 [2024-11-17 11:17:45.225503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.706 [2024-11-17 11:17:45.225514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.706 [2024-11-17 11:17:45.226169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:20.706 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:20.965 true 00:23:20.965 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:20.965 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:21.223 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:21.223 11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:21.223 
11:17:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:21.789 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.789 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:21.789 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:21.789 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:21.789 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:22.048 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.048 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:22.306 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:22.306 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:22.306 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.306 11:17:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:22.872 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:22.872 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:22.872 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:23.130 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.130 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:23.388 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:23.388 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:23.388 11:17:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:23.647 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.647 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:23.905 11:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.bSwUKckQhQ 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GCHPqIA7jc 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.bSwUKckQhQ 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GCHPqIA7jc 00:23:23.905 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:24.471 11:17:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:24.732 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.bSwUKckQhQ 00:23:24.732 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bSwUKckQhQ 00:23:24.732 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.990 [2024-11-17 11:17:49.444030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.990 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.248 11:17:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.506 [2024-11-17 11:17:49.989580] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.506 [2024-11-17 11:17:49.989890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.506 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:25.764 malloc0 00:23:25.764 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.022 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bSwUKckQhQ 00:23:26.280 11:17:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.538 11:17:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bSwUKckQhQ 00:23:38.744 Initializing NVMe Controllers 00:23:38.744 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.744 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.744 Initialization complete. Launching workers. 
00:23:38.744 ======================================================== 00:23:38.744 Latency(us) 00:23:38.744 Device Information : IOPS MiB/s Average min max 00:23:38.744 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8421.73 32.90 7601.63 1028.12 8621.70 00:23:38.744 ======================================================== 00:23:38.744 Total : 8421.73 32.90 7601.63 1028.12 8621.70 00:23:38.744 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bSwUKckQhQ 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bSwUKckQhQ 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=272410 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 272410 /var/tmp/bdevperf.sock 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 272410 ']' 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.744 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.745 [2024-11-17 11:18:01.290105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:23:38.745 [2024-11-17 11:18:01.290199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid272410 ] 00:23:38.745 [2024-11-17 11:18:01.361830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.745 [2024-11-17 11:18:01.409728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bSwUKckQhQ 00:23:38.745 11:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:38.745 [2024-11-17 11:18:02.088905] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.745 TLSTESTn1 00:23:38.745 11:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.745 Running I/O for 10 seconds... 00:23:39.678 3332.00 IOPS, 13.02 MiB/s [2024-11-17T10:18:05.713Z] 3326.00 IOPS, 12.99 MiB/s [2024-11-17T10:18:06.646Z] 3325.67 IOPS, 12.99 MiB/s [2024-11-17T10:18:07.581Z] 3305.50 IOPS, 12.91 MiB/s [2024-11-17T10:18:08.515Z] 3313.00 IOPS, 12.94 MiB/s [2024-11-17T10:18:09.450Z] 3300.67 IOPS, 12.89 MiB/s [2024-11-17T10:18:10.382Z] 3320.14 IOPS, 12.97 MiB/s [2024-11-17T10:18:11.315Z] 3332.12 IOPS, 13.02 MiB/s [2024-11-17T10:18:12.690Z] 3339.00 IOPS, 13.04 MiB/s [2024-11-17T10:18:12.690Z] 3347.70 IOPS, 13.08 MiB/s 00:23:48.032 Latency(us) 00:23:48.032 [2024-11-17T10:18:12.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.032 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.032 Verification LBA range: start 0x0 length 0x2000 00:23:48.032 TLSTESTn1 : 10.04 3346.97 13.07 0.00 0.00 38154.26 9660.49 40001.23 00:23:48.032 [2024-11-17T10:18:12.690Z] =================================================================================================================== 00:23:48.032 [2024-11-17T10:18:12.690Z] Total : 3346.97 13.07 0.00 0.00 38154.26 9660.49 40001.23 00:23:48.032 { 00:23:48.032 "results": [ 00:23:48.032 { 00:23:48.032 "job": "TLSTESTn1", 00:23:48.032 "core_mask": "0x4", 00:23:48.032 "workload": "verify", 00:23:48.032 "status": "finished", 00:23:48.032 "verify_range": { 00:23:48.032 "start": 0, 00:23:48.032 "length": 8192 00:23:48.032 }, 00:23:48.032 "queue_depth": 128, 00:23:48.032 "io_size": 4096, 00:23:48.032 "runtime": 10.04041, 00:23:48.032 "iops": 
3346.974874532016, 00:23:48.032 "mibps": 13.074120603640688, 00:23:48.032 "io_failed": 0, 00:23:48.032 "io_timeout": 0, 00:23:48.032 "avg_latency_us": 38154.26128501601, 00:23:48.032 "min_latency_us": 9660.491851851852, 00:23:48.032 "max_latency_us": 40001.23259259259 00:23:48.032 } 00:23:48.032 ], 00:23:48.032 "core_count": 1 00:23:48.032 } 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 272410 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 272410 ']' 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 272410 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 272410 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:48.032 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 272410' 00:23:48.033 killing process with pid 272410 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 272410 00:23:48.033 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.033 00:23:48.033 Latency(us) 00:23:48.033 [2024-11-17T10:18:12.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.033 [2024-11-17T10:18:12.691Z] 
=================================================================================================================== 00:23:48.033 [2024-11-17T10:18:12.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 272410 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCHPqIA7jc 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCHPqIA7jc 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GCHPqIA7jc 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GCHPqIA7jc 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274283 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274283 /var/tmp/bdevperf.sock 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274283 ']' 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.033 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.033 [2024-11-17 11:18:12.661438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:48.033 [2024-11-17 11:18:12.661535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274283 ] 00:23:48.291 [2024-11-17 11:18:12.729286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.291 [2024-11-17 11:18:12.773050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.291 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.291 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.292 11:18:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GCHPqIA7jc 00:23:48.550 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.808 [2024-11-17 11:18:13.419535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.808 [2024-11-17 11:18:13.427920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:48.808 [2024-11-17 11:18:13.428704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedf370 (107): Transport endpoint is not connected 00:23:48.808 [2024-11-17 11:18:13.429696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedf370 (9): Bad file descriptor 00:23:48.808 [2024-11-17 
11:18:13.430696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:48.808 [2024-11-17 11:18:13.430716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:48.808 [2024-11-17 11:18:13.430730] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:48.808 [2024-11-17 11:18:13.430749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:48.808 request: 00:23:48.808 { 00:23:48.808 "name": "TLSTEST", 00:23:48.808 "trtype": "tcp", 00:23:48.808 "traddr": "10.0.0.2", 00:23:48.808 "adrfam": "ipv4", 00:23:48.808 "trsvcid": "4420", 00:23:48.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.808 "prchk_reftag": false, 00:23:48.808 "prchk_guard": false, 00:23:48.808 "hdgst": false, 00:23:48.808 "ddgst": false, 00:23:48.808 "psk": "key0", 00:23:48.808 "allow_unrecognized_csi": false, 00:23:48.808 "method": "bdev_nvme_attach_controller", 00:23:48.808 "req_id": 1 00:23:48.808 } 00:23:48.808 Got JSON-RPC error response 00:23:48.808 response: 00:23:48.808 { 00:23:48.808 "code": -5, 00:23:48.808 "message": "Input/output error" 00:23:48.808 } 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274283 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274283 ']' 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274283 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.808 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274283 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274283' 00:23:49.067 killing process with pid 274283 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274283 00:23:49.067 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.067 00:23:49.067 Latency(us) 00:23:49.067 [2024-11-17T10:18:13.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.067 [2024-11-17T10:18:13.725Z] =================================================================================================================== 00:23:49.067 [2024-11-17T10:18:13.725Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274283 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bSwUKckQhQ 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bSwUKckQhQ 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bSwUKckQhQ 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bSwUKckQhQ 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274423 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274423 
/var/tmp/bdevperf.sock 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274423 ']' 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.067 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.067 [2024-11-17 11:18:13.689316] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:49.067 [2024-11-17 11:18:13.689405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274423 ] 00:23:49.326 [2024-11-17 11:18:13.757191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.326 [2024-11-17 11:18:13.804637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.326 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.326 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.326 11:18:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bSwUKckQhQ 00:23:49.584 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:49.842 [2024-11-17 11:18:14.462470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.842 [2024-11-17 11:18:14.470501] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:49.842 [2024-11-17 11:18:14.470553] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:49.842 [2024-11-17 11:18:14.470593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:49.842 [2024-11-17 11:18:14.470699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1781370 (107): Transport endpoint is not connected 00:23:49.842 [2024-11-17 11:18:14.471688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1781370 (9): Bad file descriptor 00:23:49.842 [2024-11-17 11:18:14.472687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:49.842 [2024-11-17 11:18:14.472709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.842 [2024-11-17 11:18:14.472722] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:49.842 [2024-11-17 11:18:14.472742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:49.842 request: 00:23:49.842 { 00:23:49.842 "name": "TLSTEST", 00:23:49.842 "trtype": "tcp", 00:23:49.842 "traddr": "10.0.0.2", 00:23:49.842 "adrfam": "ipv4", 00:23:49.842 "trsvcid": "4420", 00:23:49.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:49.842 "prchk_reftag": false, 00:23:49.842 "prchk_guard": false, 00:23:49.842 "hdgst": false, 00:23:49.842 "ddgst": false, 00:23:49.842 "psk": "key0", 00:23:49.842 "allow_unrecognized_csi": false, 00:23:49.842 "method": "bdev_nvme_attach_controller", 00:23:49.842 "req_id": 1 00:23:49.842 } 00:23:49.842 Got JSON-RPC error response 00:23:49.842 response: 00:23:49.842 { 00:23:49.842 "code": -5, 00:23:49.842 "message": "Input/output error" 00:23:49.842 } 00:23:49.842 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274423 00:23:49.842 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274423 ']' 00:23:49.842 11:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274423 00:23:49.842 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.842 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.100 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274423 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274423' 00:23:50.101 killing process with pid 274423 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274423 00:23:50.101 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.101 00:23:50.101 Latency(us) 00:23:50.101 [2024-11-17T10:18:14.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.101 [2024-11-17T10:18:14.759Z] =================================================================================================================== 00:23:50.101 [2024-11-17T10:18:14.759Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274423 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:50.101 11:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bSwUKckQhQ 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bSwUKckQhQ 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bSwUKckQhQ 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bSwUKckQhQ 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274564 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274564 /var/tmp/bdevperf.sock 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274564 ']' 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.101 11:18:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.359 [2024-11-17 11:18:14.774723] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:50.359 [2024-11-17 11:18:14.774805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274564 ] 00:23:50.359 [2024-11-17 11:18:14.842592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.359 [2024-11-17 11:18:14.890867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.617 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.617 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.617 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bSwUKckQhQ 00:23:50.876 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.134 [2024-11-17 11:18:15.563167] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.134 [2024-11-17 11:18:15.573366] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.134 [2024-11-17 11:18:15.573396] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.134 [2024-11-17 11:18:15.573430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:51.134 [2024-11-17 11:18:15.573496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196f370 (107): Transport endpoint is not connected 00:23:51.134 [2024-11-17 11:18:15.574487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196f370 (9): Bad file descriptor 00:23:51.134 [2024-11-17 11:18:15.575487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:51.134 [2024-11-17 11:18:15.575510] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:51.134 [2024-11-17 11:18:15.575529] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:51.134 [2024-11-17 11:18:15.575565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:51.134 request: 00:23:51.134 { 00:23:51.134 "name": "TLSTEST", 00:23:51.134 "trtype": "tcp", 00:23:51.134 "traddr": "10.0.0.2", 00:23:51.134 "adrfam": "ipv4", 00:23:51.134 "trsvcid": "4420", 00:23:51.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.134 "prchk_reftag": false, 00:23:51.134 "prchk_guard": false, 00:23:51.134 "hdgst": false, 00:23:51.135 "ddgst": false, 00:23:51.135 "psk": "key0", 00:23:51.135 "allow_unrecognized_csi": false, 00:23:51.135 "method": "bdev_nvme_attach_controller", 00:23:51.135 "req_id": 1 00:23:51.135 } 00:23:51.135 Got JSON-RPC error response 00:23:51.135 response: 00:23:51.135 { 00:23:51.135 "code": -5, 00:23:51.135 "message": "Input/output error" 00:23:51.135 } 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274564 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274564 ']' 00:23:51.135 11:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274564 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274564 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274564' 00:23:51.135 killing process with pid 274564 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274564 00:23:51.135 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.135 00:23:51.135 Latency(us) 00:23:51.135 [2024-11-17T10:18:15.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.135 [2024-11-17T10:18:15.793Z] =================================================================================================================== 00:23:51.135 [2024-11-17T10:18:15.793Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.135 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274564 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:51.393 11:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274710 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274710 /var/tmp/bdevperf.sock 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274710 ']' 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.393 11:18:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.393 [2024-11-17 11:18:15.870998] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:51.394 [2024-11-17 11:18:15.871085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274710 ] 00:23:51.394 [2024-11-17 11:18:15.936471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.394 [2024-11-17 11:18:15.978572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.652 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.652 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:51.652 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:51.910 [2024-11-17 11:18:16.352993] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:51.910 [2024-11-17 11:18:16.353033] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:51.910 request: 00:23:51.910 { 00:23:51.910 "name": "key0", 00:23:51.910 "path": "", 00:23:51.910 "method": "keyring_file_add_key", 00:23:51.910 "req_id": 1 00:23:51.910 } 00:23:51.910 Got JSON-RPC error response 00:23:51.910 response: 00:23:51.910 { 00:23:51.910 "code": -1, 00:23:51.910 "message": "Operation not permitted" 00:23:51.910 } 00:23:51.910 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.168 [2024-11-17 11:18:16.629843] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:52.168 [2024-11-17 11:18:16.629901] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:52.168 request: 00:23:52.168 { 00:23:52.168 "name": "TLSTEST", 00:23:52.168 "trtype": "tcp", 00:23:52.168 "traddr": "10.0.0.2", 00:23:52.168 "adrfam": "ipv4", 00:23:52.168 "trsvcid": "4420", 00:23:52.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.168 "prchk_reftag": false, 00:23:52.168 "prchk_guard": false, 00:23:52.168 "hdgst": false, 00:23:52.168 "ddgst": false, 00:23:52.168 "psk": "key0", 00:23:52.168 "allow_unrecognized_csi": false, 00:23:52.168 "method": "bdev_nvme_attach_controller", 00:23:52.168 "req_id": 1 00:23:52.168 } 00:23:52.168 Got JSON-RPC error response 00:23:52.168 response: 00:23:52.168 { 00:23:52.168 "code": -126, 00:23:52.168 "message": "Required key not available" 00:23:52.168 } 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274710 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274710 ']' 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274710 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274710 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274710' 00:23:52.168 killing process with pid 274710 00:23:52.168 
11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274710 00:23:52.168 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.168 00:23:52.168 Latency(us) 00:23:52.168 [2024-11-17T10:18:16.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.168 [2024-11-17T10:18:16.826Z] =================================================================================================================== 00:23:52.168 [2024-11-17T10:18:16.826Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.168 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274710 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 270407 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 270407 ']' 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 270407 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270407 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270407' 00:23:52.429 killing process with pid 270407 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 270407 00:23:52.429 11:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 270407 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:52.429 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.3uUOT2XQVf 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:52.689 11:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.3uUOT2XQVf 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=274862 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 274862 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274862 ']' 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.689 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.689 [2024-11-17 11:18:17.159132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:52.689 [2024-11-17 11:18:17.159215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.689 [2024-11-17 11:18:17.229560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.689 [2024-11-17 11:18:17.274143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.689 [2024-11-17 11:18:17.274211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.689 [2024-11-17 11:18:17.274224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.689 [2024-11-17 11:18:17.274235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.689 [2024-11-17 11:18:17.274245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.689 [2024-11-17 11:18:17.274871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3uUOT2XQVf 00:23:52.948 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.207 [2024-11-17 11:18:17.672203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.207 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:53.466 11:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:53.724 [2024-11-17 11:18:18.209679] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.724 [2024-11-17 11:18:18.209986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:53.724 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:53.983 malloc0 00:23:53.983 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.241 11:18:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:23:54.499 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3uUOT2XQVf 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3uUOT2XQVf 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275146 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.763 11:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275146 /var/tmp/bdevperf.sock 00:23:54.763 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275146 ']' 00:23:54.764 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.764 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.764 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.764 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.764 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.764 [2024-11-17 11:18:19.377961] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:23:54.764 [2024-11-17 11:18:19.378035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275146 ] 00:23:55.029 [2024-11-17 11:18:19.443800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.029 [2024-11-17 11:18:19.488748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.029 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.029 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.029 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:23:55.595 11:18:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.595 [2024-11-17 11:18:20.225141] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.852 TLSTESTn1 00:23:55.852 11:18:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.852 Running I/O for 10 seconds... 
00:23:58.158 3282.00 IOPS, 12.82 MiB/s [2024-11-17T10:18:23.749Z] 3323.00 IOPS, 12.98 MiB/s [2024-11-17T10:18:24.682Z] 3324.33 IOPS, 12.99 MiB/s [2024-11-17T10:18:25.615Z] 3334.50 IOPS, 13.03 MiB/s [2024-11-17T10:18:26.549Z] 3329.80 IOPS, 13.01 MiB/s [2024-11-17T10:18:27.482Z] 3338.50 IOPS, 13.04 MiB/s [2024-11-17T10:18:28.854Z] 3317.86 IOPS, 12.96 MiB/s [2024-11-17T10:18:29.786Z] 3341.50 IOPS, 13.05 MiB/s [2024-11-17T10:18:30.721Z] 3345.56 IOPS, 13.07 MiB/s [2024-11-17T10:18:30.721Z] 3351.70 IOPS, 13.09 MiB/s 00:24:06.063 Latency(us) 00:24:06.063 [2024-11-17T10:18:30.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.063 Verification LBA range: start 0x0 length 0x2000 00:24:06.063 TLSTESTn1 : 10.02 3357.71 13.12 0.00 0.00 38060.17 6310.87 33010.73 00:24:06.063 [2024-11-17T10:18:30.721Z] =================================================================================================================== 00:24:06.063 [2024-11-17T10:18:30.721Z] Total : 3357.71 13.12 0.00 0.00 38060.17 6310.87 33010.73 00:24:06.063 { 00:24:06.063 "results": [ 00:24:06.063 { 00:24:06.063 "job": "TLSTESTn1", 00:24:06.063 "core_mask": "0x4", 00:24:06.063 "workload": "verify", 00:24:06.063 "status": "finished", 00:24:06.063 "verify_range": { 00:24:06.063 "start": 0, 00:24:06.063 "length": 8192 00:24:06.063 }, 00:24:06.063 "queue_depth": 128, 00:24:06.063 "io_size": 4096, 00:24:06.063 "runtime": 10.019919, 00:24:06.063 "iops": 3357.7117739175337, 00:24:06.063 "mibps": 13.116061616865366, 00:24:06.063 "io_failed": 0, 00:24:06.063 "io_timeout": 0, 00:24:06.063 "avg_latency_us": 38060.1675611743, 00:24:06.063 "min_latency_us": 6310.874074074074, 00:24:06.063 "max_latency_us": 33010.72592592592 00:24:06.063 } 00:24:06.063 ], 00:24:06.064 "core_count": 1 00:24:06.064 } 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275146 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275146 ']' 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275146 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275146 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275146' 00:24:06.064 killing process with pid 275146 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275146 00:24:06.064 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.064 00:24:06.064 Latency(us) 00:24:06.064 [2024-11-17T10:18:30.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.064 [2024-11-17T10:18:30.722Z] =================================================================================================================== 00:24:06.064 [2024-11-17T10:18:30.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275146 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.3uUOT2XQVf 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3uUOT2XQVf 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3uUOT2XQVf 00:24:06.064 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3uUOT2XQVf 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.3uUOT2XQVf 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=276462 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.323 11:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 276462 /var/tmp/bdevperf.sock 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 276462 ']' 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.323 11:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.323 [2024-11-17 11:18:30.769007] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:06.323 [2024-11-17 11:18:30.769095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid276462 ] 00:24:06.323 [2024-11-17 11:18:30.835411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.323 [2024-11-17 11:18:30.880141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.582 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.582 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.582 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:06.840 [2024-11-17 11:18:31.265822] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3uUOT2XQVf': 0100666 00:24:06.840 [2024-11-17 11:18:31.265863] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:06.840 request: 00:24:06.840 { 00:24:06.840 "name": "key0", 00:24:06.840 "path": "/tmp/tmp.3uUOT2XQVf", 00:24:06.840 "method": "keyring_file_add_key", 00:24:06.840 "req_id": 1 00:24:06.840 } 00:24:06.840 Got JSON-RPC error response 00:24:06.840 response: 00:24:06.840 { 00:24:06.840 "code": -1, 00:24:06.840 "message": "Operation not permitted" 00:24:06.840 } 00:24:06.840 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.098 [2024-11-17 11:18:31.546724] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.098 [2024-11-17 11:18:31.546790] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:07.098 request: 00:24:07.098 { 00:24:07.098 "name": "TLSTEST", 00:24:07.098 "trtype": "tcp", 00:24:07.098 "traddr": "10.0.0.2", 00:24:07.098 "adrfam": "ipv4", 00:24:07.098 "trsvcid": "4420", 00:24:07.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.098 "prchk_reftag": false, 00:24:07.098 "prchk_guard": false, 00:24:07.098 "hdgst": false, 00:24:07.098 "ddgst": false, 00:24:07.098 "psk": "key0", 00:24:07.098 "allow_unrecognized_csi": false, 00:24:07.098 "method": "bdev_nvme_attach_controller", 00:24:07.098 "req_id": 1 00:24:07.098 } 00:24:07.098 Got JSON-RPC error response 00:24:07.098 response: 00:24:07.098 { 00:24:07.098 "code": -126, 00:24:07.098 "message": "Required key not available" 00:24:07.098 } 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 276462 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 276462 ']' 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 276462 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276462 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 276462' 00:24:07.098 killing process with pid 276462 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 276462 00:24:07.098 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.098 00:24:07.098 Latency(us) 00:24:07.098 [2024-11-17T10:18:31.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.098 [2024-11-17T10:18:31.756Z] =================================================================================================================== 00:24:07.098 [2024-11-17T10:18:31.756Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.098 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 276462 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 274862 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274862 ']' 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274862 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274862 00:24:07.356 11:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274862' 00:24:07.356 killing process with pid 274862 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274862 00:24:07.356 11:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274862 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=276615 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 276615 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 276615 ']' 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:07.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.615 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.615 [2024-11-17 11:18:32.093674] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:07.615 [2024-11-17 11:18:32.093756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.615 [2024-11-17 11:18:32.163782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.615 [2024-11-17 11:18:32.211742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.615 [2024-11-17 11:18:32.211816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.615 [2024-11-17 11:18:32.211831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.615 [2024-11-17 11:18:32.211842] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.615 [2024-11-17 11:18:32.211852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
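The `keyring_file_add_key` failure above ("Invalid permissions for key file ... 0100666", JSON-RPC code -1 "Operation not permitted") comes from the keyring rejecting PSK files that are readable by group or other; the test deliberately ran `chmod 0666` on the key first. A minimal sketch of that permission rule, with a hypothetical helper name (`check_key_perms` is illustrative, not the SPDK function; the real check is `keyring_file_check_path` in keyring.c, and GNU `stat -c '%a'` is assumed):

```shell
# Hypothetical stand-in for SPDK's key-file permission check:
# reject any key file whose group/other permission bits are set.
check_key_perms() {
    local mode
    mode=$(stat -c '%a' "$1")
    # Leading 0 makes the shell treat the mode as octal; 077 masks
    # the group and other bits, which must all be clear.
    if [ $(( 0$mode & 077 )) -ne 0 ]; then
        echo "reject $mode"
    else
        echo "accept $mode"
    fi
}

key=$(mktemp)
chmod 0666 "$key"
check_key_perms "$key"   # reject 666 (this is the case the log hits)
chmod 0600 "$key"
check_key_perms "$key"   # accept 600 (the mode the test restores later)
rm -f "$key"
```

This mirrors why the same `/tmp/tmp.3uUOT2XQVf` key succeeds earlier in the run and again after the `chmod 0600` near the end of this section.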
00:24:07.615 [2024-11-17 11:18:32.212432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3uUOT2XQVf 00:24:07.874 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.133 [2024-11-17 11:18:32.611928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.133 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.391 11:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:08.649 [2024-11-17 11:18:33.137341] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.649 [2024-11-17 11:18:33.137637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.649 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:08.908 malloc0 00:24:08.908 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:09.166 11:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:09.424 [2024-11-17 11:18:33.998296] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3uUOT2XQVf': 0100666 00:24:09.424 [2024-11-17 11:18:33.998347] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:09.424 request: 00:24:09.424 { 00:24:09.424 "name": "key0", 00:24:09.424 "path": "/tmp/tmp.3uUOT2XQVf", 00:24:09.424 "method": "keyring_file_add_key", 00:24:09.424 "req_id": 1 
00:24:09.424 } 00:24:09.424 Got JSON-RPC error response 00:24:09.424 response: 00:24:09.424 { 00:24:09.424 "code": -1, 00:24:09.424 "message": "Operation not permitted" 00:24:09.424 } 00:24:09.424 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:09.682 [2024-11-17 11:18:34.323189] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:09.682 [2024-11-17 11:18:34.323245] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:09.682 request: 00:24:09.682 { 00:24:09.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.682 "host": "nqn.2016-06.io.spdk:host1", 00:24:09.682 "psk": "key0", 00:24:09.682 "method": "nvmf_subsystem_add_host", 00:24:09.682 "req_id": 1 00:24:09.682 } 00:24:09.682 Got JSON-RPC error response 00:24:09.682 response: 00:24:09.682 { 00:24:09.682 "code": -32603, 00:24:09.682 "message": "Internal error" 00:24:09.682 } 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 276615 ']' 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:09.940 11:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276615' 00:24:09.940 killing process with pid 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 276615 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.3uUOT2XQVf 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.940 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277028 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277028 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277028 ']' 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.198 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.198 [2024-11-17 11:18:34.648178] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:10.198 [2024-11-17 11:18:34.648254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.198 [2024-11-17 11:18:34.720700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.198 [2024-11-17 11:18:34.765721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.198 [2024-11-17 11:18:34.765782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.199 [2024-11-17 11:18:34.765796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.199 [2024-11-17 11:18:34.765807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.199 [2024-11-17 11:18:34.765817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
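The `NOT run_bdevperf ...` / `es=1` lines in this section show the harness's negative-test pattern from autotest_common.sh: run a command that is expected to fail, and treat its failure as success. A simplified sketch of that wrapper (this omits the harness's `valid_exec_arg` and `es`-threshold bookkeeping, so it is an illustration of the idea, not the real helper):

```shell
# Simplified negative-test wrapper: succeed only if the wrapped
# command fails, as the harness does for the 0666-key bdevperf run.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

NOT false && echo "negative test passed"
NOT true  || echo "negative test caught unexpected success"
```

In the log, `run_bdevperf` exits nonzero because `bdev_nvme_attach_controller` gets "Required key not available" (code -126), so the `NOT` at target/tls.sh@172 passes and the suite continues.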
00:24:10.199 [2024-11-17 11:18:34.766366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3uUOT2XQVf 00:24:10.457 11:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:10.715 [2024-11-17 11:18:35.150647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.715 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:10.972 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.230 [2024-11-17 11:18:35.732140] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.230 [2024-11-17 11:18:35.732392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:11.230 11:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:11.488 malloc0 00:24:11.488 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:11.746 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:12.004 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=277313 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 277313 /var/tmp/bdevperf.sock 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277313 ']' 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:12.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.262 11:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.521 [2024-11-17 11:18:36.930367] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:12.521 [2024-11-17 11:18:36.930442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277313 ] 00:24:12.521 [2024-11-17 11:18:36.997174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.521 [2024-11-17 11:18:37.042360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.521 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.521 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:12.521 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:12.779 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.036 [2024-11-17 11:18:37.671106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.294 TLSTESTn1 00:24:13.294 11:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:13.552 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:13.552 "subsystems": [ 00:24:13.552 { 00:24:13.552 "subsystem": "keyring", 00:24:13.552 "config": [ 00:24:13.552 { 00:24:13.552 "method": "keyring_file_add_key", 00:24:13.552 "params": { 00:24:13.552 "name": "key0", 00:24:13.552 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:13.552 } 00:24:13.552 } 00:24:13.552 ] 00:24:13.552 }, 00:24:13.552 { 00:24:13.552 "subsystem": "iobuf", 00:24:13.552 "config": [ 00:24:13.552 { 00:24:13.552 "method": "iobuf_set_options", 00:24:13.552 "params": { 00:24:13.552 "small_pool_count": 8192, 00:24:13.552 "large_pool_count": 1024, 00:24:13.552 "small_bufsize": 8192, 00:24:13.552 "large_bufsize": 135168, 00:24:13.552 "enable_numa": false 00:24:13.552 } 00:24:13.552 } 00:24:13.552 ] 00:24:13.552 }, 00:24:13.552 { 00:24:13.552 "subsystem": "sock", 00:24:13.552 "config": [ 00:24:13.552 { 00:24:13.552 "method": "sock_set_default_impl", 00:24:13.552 "params": { 00:24:13.552 "impl_name": "posix" 00:24:13.552 } 00:24:13.552 }, 00:24:13.552 { 00:24:13.552 "method": "sock_impl_set_options", 00:24:13.552 "params": { 00:24:13.552 "impl_name": "ssl", 00:24:13.552 "recv_buf_size": 4096, 00:24:13.552 "send_buf_size": 4096, 00:24:13.552 "enable_recv_pipe": true, 00:24:13.552 "enable_quickack": false, 00:24:13.552 "enable_placement_id": 0, 00:24:13.552 "enable_zerocopy_send_server": true, 00:24:13.552 "enable_zerocopy_send_client": false, 00:24:13.552 "zerocopy_threshold": 0, 00:24:13.552 "tls_version": 0, 00:24:13.552 "enable_ktls": false 00:24:13.552 } 00:24:13.552 }, 00:24:13.552 { 00:24:13.552 "method": "sock_impl_set_options", 00:24:13.552 "params": { 00:24:13.552 "impl_name": "posix", 00:24:13.553 "recv_buf_size": 2097152, 00:24:13.553 "send_buf_size": 2097152, 00:24:13.553 "enable_recv_pipe": true, 00:24:13.553 "enable_quickack": false, 00:24:13.553 "enable_placement_id": 0, 
00:24:13.553 "enable_zerocopy_send_server": true, 00:24:13.553 "enable_zerocopy_send_client": false, 00:24:13.553 "zerocopy_threshold": 0, 00:24:13.553 "tls_version": 0, 00:24:13.553 "enable_ktls": false 00:24:13.553 } 00:24:13.553 } 00:24:13.553 ] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "vmd", 00:24:13.553 "config": [] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "accel", 00:24:13.553 "config": [ 00:24:13.553 { 00:24:13.553 "method": "accel_set_options", 00:24:13.553 "params": { 00:24:13.553 "small_cache_size": 128, 00:24:13.553 "large_cache_size": 16, 00:24:13.553 "task_count": 2048, 00:24:13.553 "sequence_count": 2048, 00:24:13.553 "buf_count": 2048 00:24:13.553 } 00:24:13.553 } 00:24:13.553 ] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "bdev", 00:24:13.553 "config": [ 00:24:13.553 { 00:24:13.553 "method": "bdev_set_options", 00:24:13.553 "params": { 00:24:13.553 "bdev_io_pool_size": 65535, 00:24:13.553 "bdev_io_cache_size": 256, 00:24:13.553 "bdev_auto_examine": true, 00:24:13.553 "iobuf_small_cache_size": 128, 00:24:13.553 "iobuf_large_cache_size": 16 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_raid_set_options", 00:24:13.553 "params": { 00:24:13.553 "process_window_size_kb": 1024, 00:24:13.553 "process_max_bandwidth_mb_sec": 0 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_iscsi_set_options", 00:24:13.553 "params": { 00:24:13.553 "timeout_sec": 30 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_nvme_set_options", 00:24:13.553 "params": { 00:24:13.553 "action_on_timeout": "none", 00:24:13.553 "timeout_us": 0, 00:24:13.553 "timeout_admin_us": 0, 00:24:13.553 "keep_alive_timeout_ms": 10000, 00:24:13.553 "arbitration_burst": 0, 00:24:13.553 "low_priority_weight": 0, 00:24:13.553 "medium_priority_weight": 0, 00:24:13.553 "high_priority_weight": 0, 00:24:13.553 "nvme_adminq_poll_period_us": 10000, 00:24:13.553 "nvme_ioq_poll_period_us": 0, 
00:24:13.553 "io_queue_requests": 0, 00:24:13.553 "delay_cmd_submit": true, 00:24:13.553 "transport_retry_count": 4, 00:24:13.553 "bdev_retry_count": 3, 00:24:13.553 "transport_ack_timeout": 0, 00:24:13.553 "ctrlr_loss_timeout_sec": 0, 00:24:13.553 "reconnect_delay_sec": 0, 00:24:13.553 "fast_io_fail_timeout_sec": 0, 00:24:13.553 "disable_auto_failback": false, 00:24:13.553 "generate_uuids": false, 00:24:13.553 "transport_tos": 0, 00:24:13.553 "nvme_error_stat": false, 00:24:13.553 "rdma_srq_size": 0, 00:24:13.553 "io_path_stat": false, 00:24:13.553 "allow_accel_sequence": false, 00:24:13.553 "rdma_max_cq_size": 0, 00:24:13.553 "rdma_cm_event_timeout_ms": 0, 00:24:13.553 "dhchap_digests": [ 00:24:13.553 "sha256", 00:24:13.553 "sha384", 00:24:13.553 "sha512" 00:24:13.553 ], 00:24:13.553 "dhchap_dhgroups": [ 00:24:13.553 "null", 00:24:13.553 "ffdhe2048", 00:24:13.553 "ffdhe3072", 00:24:13.553 "ffdhe4096", 00:24:13.553 "ffdhe6144", 00:24:13.553 "ffdhe8192" 00:24:13.553 ] 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_nvme_set_hotplug", 00:24:13.553 "params": { 00:24:13.553 "period_us": 100000, 00:24:13.553 "enable": false 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_malloc_create", 00:24:13.553 "params": { 00:24:13.553 "name": "malloc0", 00:24:13.553 "num_blocks": 8192, 00:24:13.553 "block_size": 4096, 00:24:13.553 "physical_block_size": 4096, 00:24:13.553 "uuid": "b48756b2-6672-428a-8832-abf0a4eae762", 00:24:13.553 "optimal_io_boundary": 0, 00:24:13.553 "md_size": 0, 00:24:13.553 "dif_type": 0, 00:24:13.553 "dif_is_head_of_md": false, 00:24:13.553 "dif_pi_format": 0 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "bdev_wait_for_examine" 00:24:13.553 } 00:24:13.553 ] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "nbd", 00:24:13.553 "config": [] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "scheduler", 00:24:13.553 "config": [ 00:24:13.553 { 00:24:13.553 "method": 
"framework_set_scheduler", 00:24:13.553 "params": { 00:24:13.553 "name": "static" 00:24:13.553 } 00:24:13.553 } 00:24:13.553 ] 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "subsystem": "nvmf", 00:24:13.553 "config": [ 00:24:13.553 { 00:24:13.553 "method": "nvmf_set_config", 00:24:13.553 "params": { 00:24:13.553 "discovery_filter": "match_any", 00:24:13.553 "admin_cmd_passthru": { 00:24:13.553 "identify_ctrlr": false 00:24:13.553 }, 00:24:13.553 "dhchap_digests": [ 00:24:13.553 "sha256", 00:24:13.553 "sha384", 00:24:13.553 "sha512" 00:24:13.553 ], 00:24:13.553 "dhchap_dhgroups": [ 00:24:13.553 "null", 00:24:13.553 "ffdhe2048", 00:24:13.553 "ffdhe3072", 00:24:13.553 "ffdhe4096", 00:24:13.553 "ffdhe6144", 00:24:13.553 "ffdhe8192" 00:24:13.553 ] 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "nvmf_set_max_subsystems", 00:24:13.553 "params": { 00:24:13.553 "max_subsystems": 1024 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "nvmf_set_crdt", 00:24:13.553 "params": { 00:24:13.553 "crdt1": 0, 00:24:13.553 "crdt2": 0, 00:24:13.553 "crdt3": 0 00:24:13.553 } 00:24:13.553 }, 00:24:13.553 { 00:24:13.553 "method": "nvmf_create_transport", 00:24:13.553 "params": { 00:24:13.553 "trtype": "TCP", 00:24:13.553 "max_queue_depth": 128, 00:24:13.554 "max_io_qpairs_per_ctrlr": 127, 00:24:13.554 "in_capsule_data_size": 4096, 00:24:13.554 "max_io_size": 131072, 00:24:13.554 "io_unit_size": 131072, 00:24:13.554 "max_aq_depth": 128, 00:24:13.554 "num_shared_buffers": 511, 00:24:13.554 "buf_cache_size": 4294967295, 00:24:13.554 "dif_insert_or_strip": false, 00:24:13.554 "zcopy": false, 00:24:13.554 "c2h_success": false, 00:24:13.554 "sock_priority": 0, 00:24:13.554 "abort_timeout_sec": 1, 00:24:13.554 "ack_timeout": 0, 00:24:13.554 "data_wr_pool_size": 0 00:24:13.554 } 00:24:13.554 }, 00:24:13.554 { 00:24:13.554 "method": "nvmf_create_subsystem", 00:24:13.554 "params": { 00:24:13.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.554 
"allow_any_host": false, 00:24:13.554 "serial_number": "SPDK00000000000001", 00:24:13.554 "model_number": "SPDK bdev Controller", 00:24:13.554 "max_namespaces": 10, 00:24:13.554 "min_cntlid": 1, 00:24:13.554 "max_cntlid": 65519, 00:24:13.554 "ana_reporting": false 00:24:13.554 } 00:24:13.554 }, 00:24:13.554 { 00:24:13.554 "method": "nvmf_subsystem_add_host", 00:24:13.554 "params": { 00:24:13.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.554 "host": "nqn.2016-06.io.spdk:host1", 00:24:13.554 "psk": "key0" 00:24:13.554 } 00:24:13.554 }, 00:24:13.554 { 00:24:13.554 "method": "nvmf_subsystem_add_ns", 00:24:13.554 "params": { 00:24:13.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.554 "namespace": { 00:24:13.554 "nsid": 1, 00:24:13.554 "bdev_name": "malloc0", 00:24:13.554 "nguid": "B48756B26672428A8832ABF0A4EAE762", 00:24:13.554 "uuid": "b48756b2-6672-428a-8832-abf0a4eae762", 00:24:13.554 "no_auto_visible": false 00:24:13.554 } 00:24:13.554 } 00:24:13.554 }, 00:24:13.554 { 00:24:13.554 "method": "nvmf_subsystem_add_listener", 00:24:13.554 "params": { 00:24:13.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.554 "listen_address": { 00:24:13.554 "trtype": "TCP", 00:24:13.554 "adrfam": "IPv4", 00:24:13.554 "traddr": "10.0.0.2", 00:24:13.554 "trsvcid": "4420" 00:24:13.554 }, 00:24:13.554 "secure_channel": true 00:24:13.554 } 00:24:13.554 } 00:24:13.554 ] 00:24:13.554 } 00:24:13.554 ] 00:24:13.554 }' 00:24:13.554 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:14.120 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:14.120 "subsystems": [ 00:24:14.120 { 00:24:14.120 "subsystem": "keyring", 00:24:14.120 "config": [ 00:24:14.120 { 00:24:14.120 "method": "keyring_file_add_key", 00:24:14.120 "params": { 00:24:14.120 "name": "key0", 00:24:14.120 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:14.120 } 
00:24:14.120 } 00:24:14.120 ] 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "subsystem": "iobuf", 00:24:14.120 "config": [ 00:24:14.120 { 00:24:14.120 "method": "iobuf_set_options", 00:24:14.120 "params": { 00:24:14.120 "small_pool_count": 8192, 00:24:14.120 "large_pool_count": 1024, 00:24:14.120 "small_bufsize": 8192, 00:24:14.120 "large_bufsize": 135168, 00:24:14.120 "enable_numa": false 00:24:14.120 } 00:24:14.120 } 00:24:14.120 ] 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "subsystem": "sock", 00:24:14.120 "config": [ 00:24:14.120 { 00:24:14.120 "method": "sock_set_default_impl", 00:24:14.120 "params": { 00:24:14.120 "impl_name": "posix" 00:24:14.120 } 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "method": "sock_impl_set_options", 00:24:14.120 "params": { 00:24:14.120 "impl_name": "ssl", 00:24:14.120 "recv_buf_size": 4096, 00:24:14.120 "send_buf_size": 4096, 00:24:14.120 "enable_recv_pipe": true, 00:24:14.120 "enable_quickack": false, 00:24:14.120 "enable_placement_id": 0, 00:24:14.120 "enable_zerocopy_send_server": true, 00:24:14.120 "enable_zerocopy_send_client": false, 00:24:14.120 "zerocopy_threshold": 0, 00:24:14.120 "tls_version": 0, 00:24:14.120 "enable_ktls": false 00:24:14.120 } 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "method": "sock_impl_set_options", 00:24:14.120 "params": { 00:24:14.120 "impl_name": "posix", 00:24:14.120 "recv_buf_size": 2097152, 00:24:14.120 "send_buf_size": 2097152, 00:24:14.120 "enable_recv_pipe": true, 00:24:14.120 "enable_quickack": false, 00:24:14.120 "enable_placement_id": 0, 00:24:14.120 "enable_zerocopy_send_server": true, 00:24:14.120 "enable_zerocopy_send_client": false, 00:24:14.120 "zerocopy_threshold": 0, 00:24:14.120 "tls_version": 0, 00:24:14.120 "enable_ktls": false 00:24:14.120 } 00:24:14.120 } 00:24:14.120 ] 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "subsystem": "vmd", 00:24:14.120 "config": [] 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "subsystem": "accel", 00:24:14.120 "config": [ 00:24:14.120 { 00:24:14.120 
"method": "accel_set_options", 00:24:14.120 "params": { 00:24:14.120 "small_cache_size": 128, 00:24:14.120 "large_cache_size": 16, 00:24:14.120 "task_count": 2048, 00:24:14.120 "sequence_count": 2048, 00:24:14.120 "buf_count": 2048 00:24:14.120 } 00:24:14.120 } 00:24:14.120 ] 00:24:14.120 }, 00:24:14.120 { 00:24:14.120 "subsystem": "bdev", 00:24:14.120 "config": [ 00:24:14.120 { 00:24:14.120 "method": "bdev_set_options", 00:24:14.120 "params": { 00:24:14.120 "bdev_io_pool_size": 65535, 00:24:14.120 "bdev_io_cache_size": 256, 00:24:14.120 "bdev_auto_examine": true, 00:24:14.120 "iobuf_small_cache_size": 128, 00:24:14.121 "iobuf_large_cache_size": 16 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_raid_set_options", 00:24:14.121 "params": { 00:24:14.121 "process_window_size_kb": 1024, 00:24:14.121 "process_max_bandwidth_mb_sec": 0 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_iscsi_set_options", 00:24:14.121 "params": { 00:24:14.121 "timeout_sec": 30 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_nvme_set_options", 00:24:14.121 "params": { 00:24:14.121 "action_on_timeout": "none", 00:24:14.121 "timeout_us": 0, 00:24:14.121 "timeout_admin_us": 0, 00:24:14.121 "keep_alive_timeout_ms": 10000, 00:24:14.121 "arbitration_burst": 0, 00:24:14.121 "low_priority_weight": 0, 00:24:14.121 "medium_priority_weight": 0, 00:24:14.121 "high_priority_weight": 0, 00:24:14.121 "nvme_adminq_poll_period_us": 10000, 00:24:14.121 "nvme_ioq_poll_period_us": 0, 00:24:14.121 "io_queue_requests": 512, 00:24:14.121 "delay_cmd_submit": true, 00:24:14.121 "transport_retry_count": 4, 00:24:14.121 "bdev_retry_count": 3, 00:24:14.121 "transport_ack_timeout": 0, 00:24:14.121 "ctrlr_loss_timeout_sec": 0, 00:24:14.121 "reconnect_delay_sec": 0, 00:24:14.121 "fast_io_fail_timeout_sec": 0, 00:24:14.121 "disable_auto_failback": false, 00:24:14.121 "generate_uuids": false, 00:24:14.121 "transport_tos": 0, 00:24:14.121 
"nvme_error_stat": false, 00:24:14.121 "rdma_srq_size": 0, 00:24:14.121 "io_path_stat": false, 00:24:14.121 "allow_accel_sequence": false, 00:24:14.121 "rdma_max_cq_size": 0, 00:24:14.121 "rdma_cm_event_timeout_ms": 0, 00:24:14.121 "dhchap_digests": [ 00:24:14.121 "sha256", 00:24:14.121 "sha384", 00:24:14.121 "sha512" 00:24:14.121 ], 00:24:14.121 "dhchap_dhgroups": [ 00:24:14.121 "null", 00:24:14.121 "ffdhe2048", 00:24:14.121 "ffdhe3072", 00:24:14.121 "ffdhe4096", 00:24:14.121 "ffdhe6144", 00:24:14.121 "ffdhe8192" 00:24:14.121 ] 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_nvme_attach_controller", 00:24:14.121 "params": { 00:24:14.121 "name": "TLSTEST", 00:24:14.121 "trtype": "TCP", 00:24:14.121 "adrfam": "IPv4", 00:24:14.121 "traddr": "10.0.0.2", 00:24:14.121 "trsvcid": "4420", 00:24:14.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.121 "prchk_reftag": false, 00:24:14.121 "prchk_guard": false, 00:24:14.121 "ctrlr_loss_timeout_sec": 0, 00:24:14.121 "reconnect_delay_sec": 0, 00:24:14.121 "fast_io_fail_timeout_sec": 0, 00:24:14.121 "psk": "key0", 00:24:14.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.121 "hdgst": false, 00:24:14.121 "ddgst": false, 00:24:14.121 "multipath": "multipath" 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_nvme_set_hotplug", 00:24:14.121 "params": { 00:24:14.121 "period_us": 100000, 00:24:14.121 "enable": false 00:24:14.121 } 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "method": "bdev_wait_for_examine" 00:24:14.121 } 00:24:14.121 ] 00:24:14.121 }, 00:24:14.121 { 00:24:14.121 "subsystem": "nbd", 00:24:14.121 "config": [] 00:24:14.121 } 00:24:14.121 ] 00:24:14.121 }' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 277313 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277313 ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 277313 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277313 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277313' 00:24:14.121 killing process with pid 277313 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277313 00:24:14.121 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.121 00:24:14.121 Latency(us) 00:24:14.121 [2024-11-17T10:18:38.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.121 [2024-11-17T10:18:38.779Z] =================================================================================================================== 00:24:14.121 [2024-11-17T10:18:38.779Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277313 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 277028 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277028 ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277028 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277028 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277028' 00:24:14.121 killing process with pid 277028 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277028 00:24:14.121 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277028 00:24:14.381 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:14.381 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:14.381 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.381 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:14.381 "subsystems": [ 00:24:14.381 { 00:24:14.381 "subsystem": "keyring", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "keyring_file_add_key", 00:24:14.381 "params": { 00:24:14.381 "name": "key0", 00:24:14.381 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:14.381 } 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "iobuf", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "iobuf_set_options", 00:24:14.381 "params": { 00:24:14.381 "small_pool_count": 8192, 00:24:14.381 "large_pool_count": 1024, 00:24:14.381 "small_bufsize": 8192, 00:24:14.381 "large_bufsize": 135168, 00:24:14.381 "enable_numa": false 00:24:14.381 } 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 
{ 00:24:14.381 "subsystem": "sock", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "sock_set_default_impl", 00:24:14.381 "params": { 00:24:14.381 "impl_name": "posix" 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "sock_impl_set_options", 00:24:14.381 "params": { 00:24:14.381 "impl_name": "ssl", 00:24:14.381 "recv_buf_size": 4096, 00:24:14.381 "send_buf_size": 4096, 00:24:14.381 "enable_recv_pipe": true, 00:24:14.381 "enable_quickack": false, 00:24:14.381 "enable_placement_id": 0, 00:24:14.381 "enable_zerocopy_send_server": true, 00:24:14.381 "enable_zerocopy_send_client": false, 00:24:14.381 "zerocopy_threshold": 0, 00:24:14.381 "tls_version": 0, 00:24:14.381 "enable_ktls": false 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "sock_impl_set_options", 00:24:14.381 "params": { 00:24:14.381 "impl_name": "posix", 00:24:14.381 "recv_buf_size": 2097152, 00:24:14.381 "send_buf_size": 2097152, 00:24:14.381 "enable_recv_pipe": true, 00:24:14.381 "enable_quickack": false, 00:24:14.381 "enable_placement_id": 0, 00:24:14.381 "enable_zerocopy_send_server": true, 00:24:14.381 "enable_zerocopy_send_client": false, 00:24:14.381 "zerocopy_threshold": 0, 00:24:14.381 "tls_version": 0, 00:24:14.381 "enable_ktls": false 00:24:14.381 } 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "vmd", 00:24:14.381 "config": [] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "accel", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "accel_set_options", 00:24:14.381 "params": { 00:24:14.381 "small_cache_size": 128, 00:24:14.381 "large_cache_size": 16, 00:24:14.381 "task_count": 2048, 00:24:14.381 "sequence_count": 2048, 00:24:14.381 "buf_count": 2048 00:24:14.381 } 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "bdev", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "bdev_set_options", 00:24:14.381 "params": { 00:24:14.381 
"bdev_io_pool_size": 65535, 00:24:14.381 "bdev_io_cache_size": 256, 00:24:14.381 "bdev_auto_examine": true, 00:24:14.381 "iobuf_small_cache_size": 128, 00:24:14.381 "iobuf_large_cache_size": 16 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_raid_set_options", 00:24:14.381 "params": { 00:24:14.381 "process_window_size_kb": 1024, 00:24:14.381 "process_max_bandwidth_mb_sec": 0 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_iscsi_set_options", 00:24:14.381 "params": { 00:24:14.381 "timeout_sec": 30 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_nvme_set_options", 00:24:14.381 "params": { 00:24:14.381 "action_on_timeout": "none", 00:24:14.381 "timeout_us": 0, 00:24:14.381 "timeout_admin_us": 0, 00:24:14.381 "keep_alive_timeout_ms": 10000, 00:24:14.381 "arbitration_burst": 0, 00:24:14.381 "low_priority_weight": 0, 00:24:14.381 "medium_priority_weight": 0, 00:24:14.381 "high_priority_weight": 0, 00:24:14.381 "nvme_adminq_poll_period_us": 10000, 00:24:14.381 "nvme_ioq_poll_period_us": 0, 00:24:14.381 "io_queue_requests": 0, 00:24:14.381 "delay_cmd_submit": true, 00:24:14.381 "transport_retry_count": 4, 00:24:14.381 "bdev_retry_count": 3, 00:24:14.381 "transport_ack_timeout": 0, 00:24:14.381 "ctrlr_loss_timeout_sec": 0, 00:24:14.381 "reconnect_delay_sec": 0, 00:24:14.381 "fast_io_fail_timeout_sec": 0, 00:24:14.381 "disable_auto_failback": false, 00:24:14.381 "generate_uuids": false, 00:24:14.381 "transport_tos": 0, 00:24:14.381 "nvme_error_stat": false, 00:24:14.381 "rdma_srq_size": 0, 00:24:14.381 "io_path_stat": false, 00:24:14.381 "allow_accel_sequence": false, 00:24:14.381 "rdma_max_cq_size": 0, 00:24:14.381 "rdma_cm_event_timeout_ms": 0, 00:24:14.381 "dhchap_digests": [ 00:24:14.381 "sha256", 00:24:14.381 "sha384", 00:24:14.381 "sha512" 00:24:14.381 ], 00:24:14.381 "dhchap_dhgroups": [ 00:24:14.381 "null", 00:24:14.381 "ffdhe2048", 00:24:14.381 "ffdhe3072", 00:24:14.381 "ffdhe4096", 
00:24:14.381 "ffdhe6144", 00:24:14.381 "ffdhe8192" 00:24:14.381 ] 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_nvme_set_hotplug", 00:24:14.381 "params": { 00:24:14.381 "period_us": 100000, 00:24:14.381 "enable": false 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_malloc_create", 00:24:14.381 "params": { 00:24:14.381 "name": "malloc0", 00:24:14.381 "num_blocks": 8192, 00:24:14.381 "block_size": 4096, 00:24:14.381 "physical_block_size": 4096, 00:24:14.381 "uuid": "b48756b2-6672-428a-8832-abf0a4eae762", 00:24:14.381 "optimal_io_boundary": 0, 00:24:14.381 "md_size": 0, 00:24:14.381 "dif_type": 0, 00:24:14.381 "dif_is_head_of_md": false, 00:24:14.381 "dif_pi_format": 0 00:24:14.381 } 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "method": "bdev_wait_for_examine" 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "nbd", 00:24:14.381 "config": [] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "scheduler", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "framework_set_scheduler", 00:24:14.381 "params": { 00:24:14.381 "name": "static" 00:24:14.381 } 00:24:14.381 } 00:24:14.381 ] 00:24:14.381 }, 00:24:14.381 { 00:24:14.381 "subsystem": "nvmf", 00:24:14.381 "config": [ 00:24:14.381 { 00:24:14.381 "method": "nvmf_set_config", 00:24:14.381 "params": { 00:24:14.381 "discovery_filter": "match_any", 00:24:14.381 "admin_cmd_passthru": { 00:24:14.381 "identify_ctrlr": false 00:24:14.381 }, 00:24:14.381 "dhchap_digests": [ 00:24:14.381 "sha256", 00:24:14.381 "sha384", 00:24:14.381 "sha512" 00:24:14.381 ], 00:24:14.381 "dhchap_dhgroups": [ 00:24:14.382 "null", 00:24:14.382 "ffdhe2048", 00:24:14.382 "ffdhe3072", 00:24:14.382 "ffdhe4096", 00:24:14.382 "ffdhe6144", 00:24:14.382 "ffdhe8192" 00:24:14.382 ] 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_set_max_subsystems", 00:24:14.382 "params": { 00:24:14.382 "max_subsystems": 1024 00:24:14.382 
} 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_set_crdt", 00:24:14.382 "params": { 00:24:14.382 "crdt1": 0, 00:24:14.382 "crdt2": 0, 00:24:14.382 "crdt3": 0 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_create_transport", 00:24:14.382 "params": { 00:24:14.382 "trtype": "TCP", 00:24:14.382 "max_queue_depth": 128, 00:24:14.382 "max_io_qpairs_per_ctrlr": 127, 00:24:14.382 "in_capsule_data_size": 4096, 00:24:14.382 "max_io_size": 131072, 00:24:14.382 "io_unit_size": 131072, 00:24:14.382 "max_aq_depth": 128, 00:24:14.382 "num_shared_buffers": 511, 00:24:14.382 "buf_cache_size": 4294967295, 00:24:14.382 "dif_insert_or_strip": false, 00:24:14.382 "zcopy": false, 00:24:14.382 "c2h_success": false, 00:24:14.382 "sock_priority": 0, 00:24:14.382 "abort_timeout_sec": 1, 00:24:14.382 "ack_timeout": 0, 00:24:14.382 "data_wr_pool_size": 0 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_create_subsystem", 00:24:14.382 "params": { 00:24:14.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.382 "allow_any_host": false, 00:24:14.382 "serial_number": "SPDK00000000000001", 00:24:14.382 "model_number": "SPDK bdev Controller", 00:24:14.382 "max_namespaces": 10, 00:24:14.382 "min_cntlid": 1, 00:24:14.382 "max_cntlid": 65519, 00:24:14.382 "ana_reporting": false 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_subsystem_add_host", 00:24:14.382 "params": { 00:24:14.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.382 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.382 "psk": "key0" 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_subsystem_add_ns", 00:24:14.382 "params": { 00:24:14.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.382 "namespace": { 00:24:14.382 "nsid": 1, 00:24:14.382 "bdev_name": "malloc0", 00:24:14.382 "nguid": "B48756B26672428A8832ABF0A4EAE762", 00:24:14.382 "uuid": "b48756b2-6672-428a-8832-abf0a4eae762", 00:24:14.382 "no_auto_visible": false 
00:24:14.382 } 00:24:14.382 } 00:24:14.382 }, 00:24:14.382 { 00:24:14.382 "method": "nvmf_subsystem_add_listener", 00:24:14.382 "params": { 00:24:14.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.382 "listen_address": { 00:24:14.382 "trtype": "TCP", 00:24:14.382 "adrfam": "IPv4", 00:24:14.382 "traddr": "10.0.0.2", 00:24:14.382 "trsvcid": "4420" 00:24:14.382 }, 00:24:14.382 "secure_channel": true 00:24:14.382 } 00:24:14.382 } 00:24:14.382 ] 00:24:14.382 } 00:24:14.382 ] 00:24:14.382 }' 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277478 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277478 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277478 ']' 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:14.382 11:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.382 [2024-11-17 11:18:39.001045] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:14.382 [2024-11-17 11:18:39.001131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.641 [2024-11-17 11:18:39.072357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.641 [2024-11-17 11:18:39.118213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.641 [2024-11-17 11:18:39.118272] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.641 [2024-11-17 11:18:39.118300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.641 [2024-11-17 11:18:39.118311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.641 [2024-11-17 11:18:39.118320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.641 [2024-11-17 11:18:39.118966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.898 [2024-11-17 11:18:39.354242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.898 [2024-11-17 11:18:39.386271] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:14.898 [2024-11-17 11:18:39.386599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=277629 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 277629 /var/tmp/bdevperf.sock 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277629 ']' 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.465 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:15.465 "subsystems": [ 00:24:15.465 { 00:24:15.465 "subsystem": "keyring", 00:24:15.465 "config": [ 00:24:15.465 { 00:24:15.465 "method": "keyring_file_add_key", 00:24:15.465 "params": { 00:24:15.465 "name": "key0", 00:24:15.465 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:15.465 } 00:24:15.465 } 00:24:15.465 ] 00:24:15.465 }, 00:24:15.465 { 00:24:15.465 "subsystem": "iobuf", 00:24:15.465 "config": [ 00:24:15.465 { 00:24:15.465 "method": "iobuf_set_options", 00:24:15.465 "params": { 00:24:15.465 "small_pool_count": 8192, 00:24:15.465 "large_pool_count": 1024, 00:24:15.465 "small_bufsize": 8192, 00:24:15.465 "large_bufsize": 135168, 00:24:15.465 "enable_numa": false 00:24:15.465 } 00:24:15.465 } 00:24:15.465 ] 00:24:15.465 }, 00:24:15.465 { 00:24:15.465 "subsystem": "sock", 00:24:15.465 "config": [ 00:24:15.465 { 00:24:15.465 "method": "sock_set_default_impl", 00:24:15.465 "params": { 00:24:15.466 "impl_name": "posix" 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "sock_impl_set_options", 00:24:15.466 "params": { 00:24:15.466 "impl_name": "ssl", 00:24:15.466 "recv_buf_size": 4096, 00:24:15.466 "send_buf_size": 4096, 00:24:15.466 "enable_recv_pipe": true, 00:24:15.466 "enable_quickack": false, 00:24:15.466 "enable_placement_id": 0, 00:24:15.466 "enable_zerocopy_send_server": true, 00:24:15.466 "enable_zerocopy_send_client": false, 00:24:15.466 "zerocopy_threshold": 0, 00:24:15.466 "tls_version": 0, 00:24:15.466 "enable_ktls": false 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "sock_impl_set_options", 00:24:15.466 "params": { 00:24:15.466 "impl_name": "posix", 00:24:15.466 "recv_buf_size": 2097152, 00:24:15.466 "send_buf_size": 
2097152, 00:24:15.466 "enable_recv_pipe": true, 00:24:15.466 "enable_quickack": false, 00:24:15.466 "enable_placement_id": 0, 00:24:15.466 "enable_zerocopy_send_server": true, 00:24:15.466 "enable_zerocopy_send_client": false, 00:24:15.466 "zerocopy_threshold": 0, 00:24:15.466 "tls_version": 0, 00:24:15.466 "enable_ktls": false 00:24:15.466 } 00:24:15.466 } 00:24:15.466 ] 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "subsystem": "vmd", 00:24:15.466 "config": [] 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "subsystem": "accel", 00:24:15.466 "config": [ 00:24:15.466 { 00:24:15.466 "method": "accel_set_options", 00:24:15.466 "params": { 00:24:15.466 "small_cache_size": 128, 00:24:15.466 "large_cache_size": 16, 00:24:15.466 "task_count": 2048, 00:24:15.466 "sequence_count": 2048, 00:24:15.466 "buf_count": 2048 00:24:15.466 } 00:24:15.466 } 00:24:15.466 ] 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "subsystem": "bdev", 00:24:15.466 "config": [ 00:24:15.466 { 00:24:15.466 "method": "bdev_set_options", 00:24:15.466 "params": { 00:24:15.466 "bdev_io_pool_size": 65535, 00:24:15.466 "bdev_io_cache_size": 256, 00:24:15.466 "bdev_auto_examine": true, 00:24:15.466 "iobuf_small_cache_size": 128, 00:24:15.466 "iobuf_large_cache_size": 16 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "bdev_raid_set_options", 00:24:15.466 "params": { 00:24:15.466 "process_window_size_kb": 1024, 00:24:15.466 "process_max_bandwidth_mb_sec": 0 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "bdev_iscsi_set_options", 00:24:15.466 "params": { 00:24:15.466 "timeout_sec": 30 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "bdev_nvme_set_options", 00:24:15.466 "params": { 00:24:15.466 "action_on_timeout": "none", 00:24:15.466 "timeout_us": 0, 00:24:15.466 "timeout_admin_us": 0, 00:24:15.466 "keep_alive_timeout_ms": 10000, 00:24:15.466 "arbitration_burst": 0, 00:24:15.466 "low_priority_weight": 0, 00:24:15.466 "medium_priority_weight": 0, 
00:24:15.466 "high_priority_weight": 0, 00:24:15.466 "nvme_adminq_poll_period_us": 10000, 00:24:15.466 "nvme_ioq_poll_period_us": 0, 00:24:15.466 "io_queue_requests": 512, 00:24:15.466 "delay_cmd_submit": true, 00:24:15.466 "transport_retry_count": 4, 00:24:15.466 "bdev_retry_count": 3, 00:24:15.466 "transport_ack_timeout": 0, 00:24:15.466 "ctrlr_loss_timeout_sec": 0, 00:24:15.466 "reconnect_delay_sec": 0, 00:24:15.466 "fast_io_fail_timeout_sec": 0, 00:24:15.466 "disable_auto_failback": false, 00:24:15.466 "generate_uuids": false, 00:24:15.466 "transport_tos": 0, 00:24:15.466 "nvme_error_stat": false, 00:24:15.466 "rdma_srq_size": 0, 00:24:15.466 "io_path_stat": false, 00:24:15.466 "allow_accel_sequence": false, 00:24:15.466 "rdma_max_cq_size": 0, 00:24:15.466 "rdma_cm_event_timeout_ms": 0, 00:24:15.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.466 "dhchap_digests": [ 00:24:15.466 "sha256", 00:24:15.466 "sha384", 00:24:15.466 "sha512" 00:24:15.466 ], 00:24:15.466 "dhchap_dhgroups": [ 00:24:15.466 "null", 00:24:15.466 "ffdhe2048", 00:24:15.466 "ffdhe3072", 00:24:15.466 "ffdhe4096", 00:24:15.466 "ffdhe6144", 00:24:15.466 "ffdhe8192" 00:24:15.466 ] 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "bdev_nvme_attach_controller", 00:24:15.466 "params": { 00:24:15.466 "name": "TLSTEST", 00:24:15.466 "trtype": "TCP", 00:24:15.466 "adrfam": "IPv4", 00:24:15.466 "traddr": "10.0.0.2", 00:24:15.466 "trsvcid": "4420", 00:24:15.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.466 "prchk_reftag": false, 00:24:15.466 "prchk_guard": false, 00:24:15.466 "ctrlr_loss_timeout_sec": 0, 00:24:15.466 "reconnect_delay_sec": 0, 00:24:15.466 "fast_io_fail_timeout_sec": 0, 00:24:15.466 "psk": "key0", 00:24:15.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.466 "hdgst": false, 00:24:15.466 "ddgst": false, 00:24:15.466 "multipath": "multipath" 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 {
"method": "bdev_nvme_set_hotplug", 00:24:15.466 "params": { 00:24:15.466 "period_us": 100000, 00:24:15.466 "enable": false 00:24:15.466 } 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "method": "bdev_wait_for_examine" 00:24:15.466 } 00:24:15.466 ] 00:24:15.466 }, 00:24:15.466 { 00:24:15.466 "subsystem": "nbd", 00:24:15.466 "config": [] 00:24:15.466 } 00:24:15.466 ] 00:24:15.466 }' 00:24:15.466 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.466 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.725 [2024-11-17 11:18:40.127006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:15.725 [2024-11-17 11:18:40.127077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277629 ] 00:24:15.725 [2024-11-17 11:18:40.200663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.725 [2024-11-17 11:18:40.248208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.983 [2024-11-17 11:18:40.424383] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:15.983 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.983 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.983 11:18:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:16.242 Running I/O for 10 seconds... 
00:24:18.109 3277.00 IOPS, 12.80 MiB/s [2024-11-17T10:18:43.700Z] 3314.50 IOPS, 12.95 MiB/s [2024-11-17T10:18:45.072Z] 3286.00 IOPS, 12.84 MiB/s [2024-11-17T10:18:46.006Z] 3305.25 IOPS, 12.91 MiB/s [2024-11-17T10:18:46.937Z] 3283.20 IOPS, 12.82 MiB/s [2024-11-17T10:18:47.870Z] 3279.17 IOPS, 12.81 MiB/s [2024-11-17T10:18:48.803Z] 3258.14 IOPS, 12.73 MiB/s [2024-11-17T10:18:49.743Z] 3259.88 IOPS, 12.73 MiB/s [2024-11-17T10:18:50.676Z] 3250.56 IOPS, 12.70 MiB/s [2024-11-17T10:18:50.935Z] 3268.40 IOPS, 12.77 MiB/s 00:24:26.277 Latency(us) 00:24:26.277 [2024-11-17T10:18:50.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.277 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.277 Verification LBA range: start 0x0 length 0x2000 00:24:26.277 TLSTESTn1 : 10.03 3272.29 12.78 0.00 0.00 39046.79 5898.24 45244.11 00:24:26.277 [2024-11-17T10:18:50.935Z] =================================================================================================================== 00:24:26.277 [2024-11-17T10:18:50.935Z] Total : 3272.29 12.78 0.00 0.00 39046.79 5898.24 45244.11 00:24:26.277 { 00:24:26.277 "results": [ 00:24:26.277 { 00:24:26.277 "job": "TLSTESTn1", 00:24:26.277 "core_mask": "0x4", 00:24:26.277 "workload": "verify", 00:24:26.277 "status": "finished", 00:24:26.277 "verify_range": { 00:24:26.277 "start": 0, 00:24:26.277 "length": 8192 00:24:26.277 }, 00:24:26.277 "queue_depth": 128, 00:24:26.277 "io_size": 4096, 00:24:26.277 "runtime": 10.027224, 00:24:26.277 "iops": 3272.2915135834205, 00:24:26.277 "mibps": 12.782388724935236, 00:24:26.277 "io_failed": 0, 00:24:26.277 "io_timeout": 0, 00:24:26.277 "avg_latency_us": 39046.79085826775, 00:24:26.277 "min_latency_us": 5898.24, 00:24:26.277 "max_latency_us": 45244.112592592595 00:24:26.277 } 00:24:26.277 ], 00:24:26.277 "core_count": 1 00:24:26.277 } 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 277629 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277629 ']' 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277629 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277629 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277629' 00:24:26.277 killing process with pid 277629 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277629 00:24:26.277 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.277 00:24:26.277 Latency(us) 00:24:26.277 [2024-11-17T10:18:50.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.277 [2024-11-17T10:18:50.935Z] =================================================================================================================== 00:24:26.277 [2024-11-17T10:18:50.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.277 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277629 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 277478 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' 
-z 277478 ']' 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277478 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277478 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277478' 00:24:26.536 killing process with pid 277478 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277478 00:24:26.536 11:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277478 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278945 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278945 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 278945 ']' 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.795 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.795 [2024-11-17 11:18:51.260758] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:26.795 [2024-11-17 11:18:51.260851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.795 [2024-11-17 11:18:51.334434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.795 [2024-11-17 11:18:51.376615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.795 [2024-11-17 11:18:51.376685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.795 [2024-11-17 11:18:51.376713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.795 [2024-11-17 11:18:51.376724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.795 [2024-11-17 11:18:51.376733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.795 [2024-11-17 11:18:51.377310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.3uUOT2XQVf 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.3uUOT2XQVf 00:24:27.053 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:27.312 [2024-11-17 11:18:51.755428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.312 11:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:27.571 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:27.829 [2024-11-17 11:18:52.280834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.829 [2024-11-17 11:18:52.281081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:27.829 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:28.088 malloc0 00:24:28.088 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:28.346 11:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:28.604 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=279242 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 279242 /var/tmp/bdevperf.sock 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279242 ']' 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:28.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.930 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.930 [2024-11-17 11:18:53.423626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:28.930 [2024-11-17 11:18:53.423698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279242 ] 00:24:28.930 [2024-11-17 11:18:53.491292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.231 [2024-11-17 11:18:53.539060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.231 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.231 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.231 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:29.529 11:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:29.529 [2024-11-17 11:18:54.168404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.798 nvme0n1 00:24:29.798 11:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.798 Running I/O for 1 seconds... 00:24:30.793 3093.00 IOPS, 12.08 MiB/s 00:24:30.793 Latency(us) 00:24:30.793 [2024-11-17T10:18:55.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.793 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:30.793 Verification LBA range: start 0x0 length 0x2000 00:24:30.793 nvme0n1 : 1.02 3167.87 12.37 0.00 0.00 40110.25 6505.05 40777.96 00:24:30.793 [2024-11-17T10:18:55.451Z] =================================================================================================================== 00:24:30.793 [2024-11-17T10:18:55.451Z] Total : 3167.87 12.37 0.00 0.00 40110.25 6505.05 40777.96 00:24:30.793 { 00:24:30.793 "results": [ 00:24:30.793 { 00:24:30.793 "job": "nvme0n1", 00:24:30.793 "core_mask": "0x2", 00:24:30.793 "workload": "verify", 00:24:30.793 "status": "finished", 00:24:30.793 "verify_range": { 00:24:30.793 "start": 0, 00:24:30.793 "length": 8192 00:24:30.793 }, 00:24:30.793 "queue_depth": 128, 00:24:30.793 "io_size": 4096, 00:24:30.793 "runtime": 1.01677, 00:24:30.793 "iops": 3167.8747406001357, 00:24:30.793 "mibps": 12.37451070546928, 00:24:30.793 "io_failed": 0, 00:24:30.793 "io_timeout": 0, 00:24:30.793 "avg_latency_us": 40110.252026630784, 00:24:30.793 "min_latency_us": 6505.054814814815, 00:24:30.793 "max_latency_us": 40777.955555555556 00:24:30.793 } 00:24:30.793 ], 00:24:30.793 "core_count": 1 00:24:30.793 } 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 279242 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279242 ']' 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279242 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279242 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279242' 00:24:30.793 killing process with pid 279242 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279242 00:24:30.793 Received shutdown signal, test time was about 1.000000 seconds 00:24:30.793 00:24:30.793 Latency(us) 00:24:30.793 [2024-11-17T10:18:55.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.793 [2024-11-17T10:18:55.451Z] =================================================================================================================== 00:24:30.793 [2024-11-17T10:18:55.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:30.793 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279242 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 278945 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278945 ']' 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278945 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 278945 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278945' 00:24:31.073 killing process with pid 278945 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278945 00:24:31.073 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278945 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=279532 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 279532 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279532 ']' 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.338 11:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.338 [2024-11-17 11:18:55.942678] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:31.338 [2024-11-17 11:18:55.942755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.597 [2024-11-17 11:18:56.012690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.597 [2024-11-17 11:18:56.053089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.597 [2024-11-17 11:18:56.053181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.597 [2024-11-17 11:18:56.053195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.597 [2024-11-17 11:18:56.053206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.597 [2024-11-17 11:18:56.053215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:31.597 [2024-11-17 11:18:56.053856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.597 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.597 [2024-11-17 11:18:56.201377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.597 malloc0 00:24:31.597 [2024-11-17 11:18:56.232878] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.597 [2024-11-17 11:18:56.233178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=279668 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 279668 /var/tmp/bdevperf.sock 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279668 ']' 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.856 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.856 [2024-11-17 11:18:56.303501] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:31.856 [2024-11-17 11:18:56.303605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279668 ] 00:24:31.856 [2024-11-17 11:18:56.370765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.856 [2024-11-17 11:18:56.416044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.114 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.114 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:32.114 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3uUOT2XQVf 00:24:32.372 11:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:32.631 [2024-11-17 11:18:57.072539] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.631 nvme0n1 00:24:32.631 11:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:32.631 Running I/O for 1 seconds... 
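The two `rpc.py` invocations above (`keyring_file_add_key`, then `bdev_nvme_attach_controller --psk key0`) talk to bdevperf over the `-s /var/tmp/bdevperf.sock` UNIX socket using JSON-RPC 2.0. As a rough sketch of what one such request body looks like on the wire (the method name and params are taken from the log; the request `id` and helper function are hypothetical, not part of SPDK):

```python
import json

def build_rpc_request(method, params, req_id=1):
    # SPDK's rpc.py encodes each call as a JSON-RPC 2.0 request and
    # writes it to the UNIX domain socket given via -s.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# The keyring_file_add_key call from the log, as a raw request body.
req = build_rpc_request(
    "keyring_file_add_key",
    {"name": "key0", "path": "/tmp/tmp.3uUOT2XQVf"},
)
```

The `save_config` output dumped later in the log is essentially the server-side record of the same `params` objects, grouped by subsystem.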
00:24:34.004 3344.00 IOPS, 13.06 MiB/s 00:24:34.004 Latency(us) 00:24:34.004 [2024-11-17T10:18:58.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.004 Verification LBA range: start 0x0 length 0x2000 00:24:34.004 nvme0n1 : 1.02 3402.38 13.29 0.00 0.00 37285.32 7281.78 35146.71 00:24:34.004 [2024-11-17T10:18:58.662Z] =================================================================================================================== 00:24:34.004 [2024-11-17T10:18:58.662Z] Total : 3402.38 13.29 0.00 0.00 37285.32 7281.78 35146.71 00:24:34.004 { 00:24:34.004 "results": [ 00:24:34.004 { 00:24:34.004 "job": "nvme0n1", 00:24:34.004 "core_mask": "0x2", 00:24:34.004 "workload": "verify", 00:24:34.004 "status": "finished", 00:24:34.004 "verify_range": { 00:24:34.004 "start": 0, 00:24:34.004 "length": 8192 00:24:34.004 }, 00:24:34.004 "queue_depth": 128, 00:24:34.004 "io_size": 4096, 00:24:34.004 "runtime": 1.020461, 00:24:34.004 "iops": 3402.383824565564, 00:24:34.004 "mibps": 13.290561814709234, 00:24:34.004 "io_failed": 0, 00:24:34.004 "io_timeout": 0, 00:24:34.004 "avg_latency_us": 37285.323830005116, 00:24:34.004 "min_latency_us": 7281.777777777777, 00:24:34.004 "max_latency_us": 35146.71407407407 00:24:34.004 } 00:24:34.004 ], 00:24:34.004 "core_count": 1 00:24:34.004 } 00:24:34.004 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:34.004 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.004 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.004 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.004 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:34.004 "subsystems": [ 00:24:34.004 { 00:24:34.004 "subsystem": 
"keyring", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "keyring_file_add_key", 00:24:34.005 "params": { 00:24:34.005 "name": "key0", 00:24:34.005 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:34.005 } 00:24:34.005 } 00:24:34.005 ] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "iobuf", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "iobuf_set_options", 00:24:34.005 "params": { 00:24:34.005 "small_pool_count": 8192, 00:24:34.005 "large_pool_count": 1024, 00:24:34.005 "small_bufsize": 8192, 00:24:34.005 "large_bufsize": 135168, 00:24:34.005 "enable_numa": false 00:24:34.005 } 00:24:34.005 } 00:24:34.005 ] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "sock", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "sock_set_default_impl", 00:24:34.005 "params": { 00:24:34.005 "impl_name": "posix" 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "sock_impl_set_options", 00:24:34.005 "params": { 00:24:34.005 "impl_name": "ssl", 00:24:34.005 "recv_buf_size": 4096, 00:24:34.005 "send_buf_size": 4096, 00:24:34.005 "enable_recv_pipe": true, 00:24:34.005 "enable_quickack": false, 00:24:34.005 "enable_placement_id": 0, 00:24:34.005 "enable_zerocopy_send_server": true, 00:24:34.005 "enable_zerocopy_send_client": false, 00:24:34.005 "zerocopy_threshold": 0, 00:24:34.005 "tls_version": 0, 00:24:34.005 "enable_ktls": false 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "sock_impl_set_options", 00:24:34.005 "params": { 00:24:34.005 "impl_name": "posix", 00:24:34.005 "recv_buf_size": 2097152, 00:24:34.005 "send_buf_size": 2097152, 00:24:34.005 "enable_recv_pipe": true, 00:24:34.005 "enable_quickack": false, 00:24:34.005 "enable_placement_id": 0, 00:24:34.005 "enable_zerocopy_send_server": true, 00:24:34.005 "enable_zerocopy_send_client": false, 00:24:34.005 "zerocopy_threshold": 0, 00:24:34.005 "tls_version": 0, 00:24:34.005 "enable_ktls": false 00:24:34.005 } 00:24:34.005 } 00:24:34.005 
] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "vmd", 00:24:34.005 "config": [] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "accel", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "accel_set_options", 00:24:34.005 "params": { 00:24:34.005 "small_cache_size": 128, 00:24:34.005 "large_cache_size": 16, 00:24:34.005 "task_count": 2048, 00:24:34.005 "sequence_count": 2048, 00:24:34.005 "buf_count": 2048 00:24:34.005 } 00:24:34.005 } 00:24:34.005 ] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "bdev", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "bdev_set_options", 00:24:34.005 "params": { 00:24:34.005 "bdev_io_pool_size": 65535, 00:24:34.005 "bdev_io_cache_size": 256, 00:24:34.005 "bdev_auto_examine": true, 00:24:34.005 "iobuf_small_cache_size": 128, 00:24:34.005 "iobuf_large_cache_size": 16 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_raid_set_options", 00:24:34.005 "params": { 00:24:34.005 "process_window_size_kb": 1024, 00:24:34.005 "process_max_bandwidth_mb_sec": 0 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_iscsi_set_options", 00:24:34.005 "params": { 00:24:34.005 "timeout_sec": 30 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_nvme_set_options", 00:24:34.005 "params": { 00:24:34.005 "action_on_timeout": "none", 00:24:34.005 "timeout_us": 0, 00:24:34.005 "timeout_admin_us": 0, 00:24:34.005 "keep_alive_timeout_ms": 10000, 00:24:34.005 "arbitration_burst": 0, 00:24:34.005 "low_priority_weight": 0, 00:24:34.005 "medium_priority_weight": 0, 00:24:34.005 "high_priority_weight": 0, 00:24:34.005 "nvme_adminq_poll_period_us": 10000, 00:24:34.005 "nvme_ioq_poll_period_us": 0, 00:24:34.005 "io_queue_requests": 0, 00:24:34.005 "delay_cmd_submit": true, 00:24:34.005 "transport_retry_count": 4, 00:24:34.005 "bdev_retry_count": 3, 00:24:34.005 "transport_ack_timeout": 0, 00:24:34.005 "ctrlr_loss_timeout_sec": 0, 
00:24:34.005 "reconnect_delay_sec": 0, 00:24:34.005 "fast_io_fail_timeout_sec": 0, 00:24:34.005 "disable_auto_failback": false, 00:24:34.005 "generate_uuids": false, 00:24:34.005 "transport_tos": 0, 00:24:34.005 "nvme_error_stat": false, 00:24:34.005 "rdma_srq_size": 0, 00:24:34.005 "io_path_stat": false, 00:24:34.005 "allow_accel_sequence": false, 00:24:34.005 "rdma_max_cq_size": 0, 00:24:34.005 "rdma_cm_event_timeout_ms": 0, 00:24:34.005 "dhchap_digests": [ 00:24:34.005 "sha256", 00:24:34.005 "sha384", 00:24:34.005 "sha512" 00:24:34.005 ], 00:24:34.005 "dhchap_dhgroups": [ 00:24:34.005 "null", 00:24:34.005 "ffdhe2048", 00:24:34.005 "ffdhe3072", 00:24:34.005 "ffdhe4096", 00:24:34.005 "ffdhe6144", 00:24:34.005 "ffdhe8192" 00:24:34.005 ] 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_nvme_set_hotplug", 00:24:34.005 "params": { 00:24:34.005 "period_us": 100000, 00:24:34.005 "enable": false 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_malloc_create", 00:24:34.005 "params": { 00:24:34.005 "name": "malloc0", 00:24:34.005 "num_blocks": 8192, 00:24:34.005 "block_size": 4096, 00:24:34.005 "physical_block_size": 4096, 00:24:34.005 "uuid": "e43121e3-5090-4b10-b3d4-c9e92daa50e2", 00:24:34.005 "optimal_io_boundary": 0, 00:24:34.005 "md_size": 0, 00:24:34.005 "dif_type": 0, 00:24:34.005 "dif_is_head_of_md": false, 00:24:34.005 "dif_pi_format": 0 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "bdev_wait_for_examine" 00:24:34.005 } 00:24:34.005 ] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "nbd", 00:24:34.005 "config": [] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "scheduler", 00:24:34.005 "config": [ 00:24:34.005 { 00:24:34.005 "method": "framework_set_scheduler", 00:24:34.005 "params": { 00:24:34.005 "name": "static" 00:24:34.005 } 00:24:34.005 } 00:24:34.005 ] 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "subsystem": "nvmf", 00:24:34.005 "config": [ 00:24:34.005 { 
00:24:34.005 "method": "nvmf_set_config", 00:24:34.005 "params": { 00:24:34.005 "discovery_filter": "match_any", 00:24:34.005 "admin_cmd_passthru": { 00:24:34.005 "identify_ctrlr": false 00:24:34.005 }, 00:24:34.005 "dhchap_digests": [ 00:24:34.005 "sha256", 00:24:34.005 "sha384", 00:24:34.005 "sha512" 00:24:34.005 ], 00:24:34.005 "dhchap_dhgroups": [ 00:24:34.005 "null", 00:24:34.005 "ffdhe2048", 00:24:34.005 "ffdhe3072", 00:24:34.005 "ffdhe4096", 00:24:34.005 "ffdhe6144", 00:24:34.005 "ffdhe8192" 00:24:34.005 ] 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_set_max_subsystems", 00:24:34.005 "params": { 00:24:34.005 "max_subsystems": 1024 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_set_crdt", 00:24:34.005 "params": { 00:24:34.005 "crdt1": 0, 00:24:34.005 "crdt2": 0, 00:24:34.005 "crdt3": 0 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_create_transport", 00:24:34.005 "params": { 00:24:34.005 "trtype": "TCP", 00:24:34.005 "max_queue_depth": 128, 00:24:34.005 "max_io_qpairs_per_ctrlr": 127, 00:24:34.005 "in_capsule_data_size": 4096, 00:24:34.005 "max_io_size": 131072, 00:24:34.005 "io_unit_size": 131072, 00:24:34.005 "max_aq_depth": 128, 00:24:34.005 "num_shared_buffers": 511, 00:24:34.005 "buf_cache_size": 4294967295, 00:24:34.005 "dif_insert_or_strip": false, 00:24:34.005 "zcopy": false, 00:24:34.005 "c2h_success": false, 00:24:34.005 "sock_priority": 0, 00:24:34.005 "abort_timeout_sec": 1, 00:24:34.005 "ack_timeout": 0, 00:24:34.005 "data_wr_pool_size": 0 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_create_subsystem", 00:24:34.005 "params": { 00:24:34.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.005 "allow_any_host": false, 00:24:34.005 "serial_number": "00000000000000000000", 00:24:34.005 "model_number": "SPDK bdev Controller", 00:24:34.005 "max_namespaces": 32, 00:24:34.005 "min_cntlid": 1, 00:24:34.005 "max_cntlid": 65519, 00:24:34.005 
"ana_reporting": false 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_subsystem_add_host", 00:24:34.005 "params": { 00:24:34.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.005 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.005 "psk": "key0" 00:24:34.005 } 00:24:34.005 }, 00:24:34.005 { 00:24:34.005 "method": "nvmf_subsystem_add_ns", 00:24:34.005 "params": { 00:24:34.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.005 "namespace": { 00:24:34.005 "nsid": 1, 00:24:34.006 "bdev_name": "malloc0", 00:24:34.006 "nguid": "E43121E350904B10B3D4C9E92DAA50E2", 00:24:34.006 "uuid": "e43121e3-5090-4b10-b3d4-c9e92daa50e2", 00:24:34.006 "no_auto_visible": false 00:24:34.006 } 00:24:34.006 } 00:24:34.006 }, 00:24:34.006 { 00:24:34.006 "method": "nvmf_subsystem_add_listener", 00:24:34.006 "params": { 00:24:34.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.006 "listen_address": { 00:24:34.006 "trtype": "TCP", 00:24:34.006 "adrfam": "IPv4", 00:24:34.006 "traddr": "10.0.0.2", 00:24:34.006 "trsvcid": "4420" 00:24:34.006 }, 00:24:34.006 "secure_channel": false, 00:24:34.006 "sock_impl": "ssl" 00:24:34.006 } 00:24:34.006 } 00:24:34.006 ] 00:24:34.006 } 00:24:34.006 ] 00:24:34.006 }' 00:24:34.006 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:34.264 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:34.264 "subsystems": [ 00:24:34.264 { 00:24:34.264 "subsystem": "keyring", 00:24:34.264 "config": [ 00:24:34.264 { 00:24:34.264 "method": "keyring_file_add_key", 00:24:34.264 "params": { 00:24:34.264 "name": "key0", 00:24:34.264 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:34.264 } 00:24:34.264 } 00:24:34.264 ] 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "subsystem": "iobuf", 00:24:34.264 "config": [ 00:24:34.264 { 00:24:34.264 "method": "iobuf_set_options", 00:24:34.264 "params": { 00:24:34.264 
"small_pool_count": 8192, 00:24:34.264 "large_pool_count": 1024, 00:24:34.264 "small_bufsize": 8192, 00:24:34.264 "large_bufsize": 135168, 00:24:34.264 "enable_numa": false 00:24:34.264 } 00:24:34.264 } 00:24:34.264 ] 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "subsystem": "sock", 00:24:34.264 "config": [ 00:24:34.264 { 00:24:34.264 "method": "sock_set_default_impl", 00:24:34.264 "params": { 00:24:34.264 "impl_name": "posix" 00:24:34.264 } 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "method": "sock_impl_set_options", 00:24:34.264 "params": { 00:24:34.264 "impl_name": "ssl", 00:24:34.264 "recv_buf_size": 4096, 00:24:34.264 "send_buf_size": 4096, 00:24:34.264 "enable_recv_pipe": true, 00:24:34.264 "enable_quickack": false, 00:24:34.264 "enable_placement_id": 0, 00:24:34.264 "enable_zerocopy_send_server": true, 00:24:34.264 "enable_zerocopy_send_client": false, 00:24:34.264 "zerocopy_threshold": 0, 00:24:34.264 "tls_version": 0, 00:24:34.264 "enable_ktls": false 00:24:34.264 } 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "method": "sock_impl_set_options", 00:24:34.264 "params": { 00:24:34.264 "impl_name": "posix", 00:24:34.264 "recv_buf_size": 2097152, 00:24:34.264 "send_buf_size": 2097152, 00:24:34.264 "enable_recv_pipe": true, 00:24:34.264 "enable_quickack": false, 00:24:34.264 "enable_placement_id": 0, 00:24:34.264 "enable_zerocopy_send_server": true, 00:24:34.264 "enable_zerocopy_send_client": false, 00:24:34.264 "zerocopy_threshold": 0, 00:24:34.264 "tls_version": 0, 00:24:34.264 "enable_ktls": false 00:24:34.264 } 00:24:34.264 } 00:24:34.264 ] 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "subsystem": "vmd", 00:24:34.264 "config": [] 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "subsystem": "accel", 00:24:34.264 "config": [ 00:24:34.264 { 00:24:34.264 "method": "accel_set_options", 00:24:34.264 "params": { 00:24:34.264 "small_cache_size": 128, 00:24:34.264 "large_cache_size": 16, 00:24:34.264 "task_count": 2048, 00:24:34.264 "sequence_count": 2048, 00:24:34.264 
"buf_count": 2048 00:24:34.264 } 00:24:34.264 } 00:24:34.264 ] 00:24:34.264 }, 00:24:34.264 { 00:24:34.264 "subsystem": "bdev", 00:24:34.264 "config": [ 00:24:34.264 { 00:24:34.264 "method": "bdev_set_options", 00:24:34.264 "params": { 00:24:34.264 "bdev_io_pool_size": 65535, 00:24:34.264 "bdev_io_cache_size": 256, 00:24:34.265 "bdev_auto_examine": true, 00:24:34.265 "iobuf_small_cache_size": 128, 00:24:34.265 "iobuf_large_cache_size": 16 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_raid_set_options", 00:24:34.265 "params": { 00:24:34.265 "process_window_size_kb": 1024, 00:24:34.265 "process_max_bandwidth_mb_sec": 0 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_iscsi_set_options", 00:24:34.265 "params": { 00:24:34.265 "timeout_sec": 30 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_nvme_set_options", 00:24:34.265 "params": { 00:24:34.265 "action_on_timeout": "none", 00:24:34.265 "timeout_us": 0, 00:24:34.265 "timeout_admin_us": 0, 00:24:34.265 "keep_alive_timeout_ms": 10000, 00:24:34.265 "arbitration_burst": 0, 00:24:34.265 "low_priority_weight": 0, 00:24:34.265 "medium_priority_weight": 0, 00:24:34.265 "high_priority_weight": 0, 00:24:34.265 "nvme_adminq_poll_period_us": 10000, 00:24:34.265 "nvme_ioq_poll_period_us": 0, 00:24:34.265 "io_queue_requests": 512, 00:24:34.265 "delay_cmd_submit": true, 00:24:34.265 "transport_retry_count": 4, 00:24:34.265 "bdev_retry_count": 3, 00:24:34.265 "transport_ack_timeout": 0, 00:24:34.265 "ctrlr_loss_timeout_sec": 0, 00:24:34.265 "reconnect_delay_sec": 0, 00:24:34.265 "fast_io_fail_timeout_sec": 0, 00:24:34.265 "disable_auto_failback": false, 00:24:34.265 "generate_uuids": false, 00:24:34.265 "transport_tos": 0, 00:24:34.265 "nvme_error_stat": false, 00:24:34.265 "rdma_srq_size": 0, 00:24:34.265 "io_path_stat": false, 00:24:34.265 "allow_accel_sequence": false, 00:24:34.265 "rdma_max_cq_size": 0, 00:24:34.265 "rdma_cm_event_timeout_ms": 0, 
00:24:34.265 "dhchap_digests": [ 00:24:34.265 "sha256", 00:24:34.265 "sha384", 00:24:34.265 "sha512" 00:24:34.265 ], 00:24:34.265 "dhchap_dhgroups": [ 00:24:34.265 "null", 00:24:34.265 "ffdhe2048", 00:24:34.265 "ffdhe3072", 00:24:34.265 "ffdhe4096", 00:24:34.265 "ffdhe6144", 00:24:34.265 "ffdhe8192" 00:24:34.265 ] 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_nvme_attach_controller", 00:24:34.265 "params": { 00:24:34.265 "name": "nvme0", 00:24:34.265 "trtype": "TCP", 00:24:34.265 "adrfam": "IPv4", 00:24:34.265 "traddr": "10.0.0.2", 00:24:34.265 "trsvcid": "4420", 00:24:34.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.265 "prchk_reftag": false, 00:24:34.265 "prchk_guard": false, 00:24:34.265 "ctrlr_loss_timeout_sec": 0, 00:24:34.265 "reconnect_delay_sec": 0, 00:24:34.265 "fast_io_fail_timeout_sec": 0, 00:24:34.265 "psk": "key0", 00:24:34.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.265 "hdgst": false, 00:24:34.265 "ddgst": false, 00:24:34.265 "multipath": "multipath" 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_nvme_set_hotplug", 00:24:34.265 "params": { 00:24:34.265 "period_us": 100000, 00:24:34.265 "enable": false 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_enable_histogram", 00:24:34.265 "params": { 00:24:34.265 "name": "nvme0n1", 00:24:34.265 "enable": true 00:24:34.265 } 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "method": "bdev_wait_for_examine" 00:24:34.265 } 00:24:34.265 ] 00:24:34.265 }, 00:24:34.265 { 00:24:34.265 "subsystem": "nbd", 00:24:34.265 "config": [] 00:24:34.265 } 00:24:34.265 ] 00:24:34.265 }' 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 279668 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279668 ']' 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279668 00:24:34.265 11:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279668 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279668' 00:24:34.265 killing process with pid 279668 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279668 00:24:34.265 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.265 00:24:34.265 Latency(us) 00:24:34.265 [2024-11-17T10:18:58.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.265 [2024-11-17T10:18:58.923Z] =================================================================================================================== 00:24:34.265 [2024-11-17T10:18:58.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.265 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279668 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 279532 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279532 ']' 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279532 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.524 11:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279532 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279532' 00:24:34.524 killing process with pid 279532 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279532 00:24:34.524 11:18:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279532 00:24:34.783 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:34.783 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.783 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:34.783 "subsystems": [ 00:24:34.783 { 00:24:34.783 "subsystem": "keyring", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "keyring_file_add_key", 00:24:34.783 "params": { 00:24:34.783 "name": "key0", 00:24:34.783 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:34.783 } 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "iobuf", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "iobuf_set_options", 00:24:34.783 "params": { 00:24:34.783 "small_pool_count": 8192, 00:24:34.783 "large_pool_count": 1024, 00:24:34.783 "small_bufsize": 8192, 00:24:34.783 "large_bufsize": 135168, 00:24:34.783 "enable_numa": false 00:24:34.783 } 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "sock", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "sock_set_default_impl", 00:24:34.783 "params": { 00:24:34.783 "impl_name": "posix" 00:24:34.783 
} 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "sock_impl_set_options", 00:24:34.783 "params": { 00:24:34.783 "impl_name": "ssl", 00:24:34.783 "recv_buf_size": 4096, 00:24:34.783 "send_buf_size": 4096, 00:24:34.783 "enable_recv_pipe": true, 00:24:34.783 "enable_quickack": false, 00:24:34.783 "enable_placement_id": 0, 00:24:34.783 "enable_zerocopy_send_server": true, 00:24:34.783 "enable_zerocopy_send_client": false, 00:24:34.783 "zerocopy_threshold": 0, 00:24:34.783 "tls_version": 0, 00:24:34.783 "enable_ktls": false 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "sock_impl_set_options", 00:24:34.783 "params": { 00:24:34.783 "impl_name": "posix", 00:24:34.783 "recv_buf_size": 2097152, 00:24:34.783 "send_buf_size": 2097152, 00:24:34.783 "enable_recv_pipe": true, 00:24:34.783 "enable_quickack": false, 00:24:34.783 "enable_placement_id": 0, 00:24:34.783 "enable_zerocopy_send_server": true, 00:24:34.783 "enable_zerocopy_send_client": false, 00:24:34.783 "zerocopy_threshold": 0, 00:24:34.783 "tls_version": 0, 00:24:34.783 "enable_ktls": false 00:24:34.783 } 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "vmd", 00:24:34.783 "config": [] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "accel", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "accel_set_options", 00:24:34.783 "params": { 00:24:34.783 "small_cache_size": 128, 00:24:34.783 "large_cache_size": 16, 00:24:34.783 "task_count": 2048, 00:24:34.783 "sequence_count": 2048, 00:24:34.783 "buf_count": 2048 00:24:34.783 } 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "bdev", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "bdev_set_options", 00:24:34.783 "params": { 00:24:34.783 "bdev_io_pool_size": 65535, 00:24:34.783 "bdev_io_cache_size": 256, 00:24:34.783 "bdev_auto_examine": true, 00:24:34.783 "iobuf_small_cache_size": 128, 00:24:34.783 "iobuf_large_cache_size": 16 
00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_raid_set_options", 00:24:34.783 "params": { 00:24:34.783 "process_window_size_kb": 1024, 00:24:34.783 "process_max_bandwidth_mb_sec": 0 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_iscsi_set_options", 00:24:34.783 "params": { 00:24:34.783 "timeout_sec": 30 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_nvme_set_options", 00:24:34.783 "params": { 00:24:34.783 "action_on_timeout": "none", 00:24:34.783 "timeout_us": 0, 00:24:34.783 "timeout_admin_us": 0, 00:24:34.783 "keep_alive_timeout_ms": 10000, 00:24:34.783 "arbitration_burst": 0, 00:24:34.783 "low_priority_weight": 0, 00:24:34.783 "medium_priority_weight": 0, 00:24:34.783 "high_priority_weight": 0, 00:24:34.783 "nvme_adminq_poll_period_us": 10000, 00:24:34.783 "nvme_ioq_poll_period_us": 0, 00:24:34.783 "io_queue_requests": 0, 00:24:34.783 "delay_cmd_submit": true, 00:24:34.783 "transport_retry_count": 4, 00:24:34.783 "bdev_retry_count": 3, 00:24:34.783 "transport_ack_timeout": 0, 00:24:34.783 "ctrlr_loss_timeout_sec": 0, 00:24:34.783 "reconnect_delay_sec": 0, 00:24:34.783 "fast_io_fail_timeout_sec": 0, 00:24:34.783 "disable_auto_failback": false, 00:24:34.783 "generate_uuids": false, 00:24:34.783 "transport_tos": 0, 00:24:34.783 "nvme_error_stat": false, 00:24:34.783 "rdma_srq_size": 0, 00:24:34.783 "io_path_stat": false, 00:24:34.783 "allow_accel_sequence": false, 00:24:34.783 "rdma_max_cq_size": 0, 00:24:34.783 "rdma_cm_event_timeout_ms": 0, 00:24:34.783 "dhchap_digests": [ 00:24:34.783 "sha256", 00:24:34.783 "sha384", 00:24:34.783 "sha512" 00:24:34.783 ], 00:24:34.783 "dhchap_dhgroups": [ 00:24:34.783 "null", 00:24:34.783 "ffdhe2048", 00:24:34.783 "ffdhe3072", 00:24:34.783 "ffdhe4096", 00:24:34.783 "ffdhe6144", 00:24:34.783 "ffdhe8192" 00:24:34.783 ] 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_nvme_set_hotplug", 00:24:34.783 "params": { 00:24:34.783 
"period_us": 100000, 00:24:34.783 "enable": false 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_malloc_create", 00:24:34.783 "params": { 00:24:34.783 "name": "malloc0", 00:24:34.783 "num_blocks": 8192, 00:24:34.783 "block_size": 4096, 00:24:34.783 "physical_block_size": 4096, 00:24:34.783 "uuid": "e43121e3-5090-4b10-b3d4-c9e92daa50e2", 00:24:34.783 "optimal_io_boundary": 0, 00:24:34.783 "md_size": 0, 00:24:34.783 "dif_type": 0, 00:24:34.783 "dif_is_head_of_md": false, 00:24:34.783 "dif_pi_format": 0 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "bdev_wait_for_examine" 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "nbd", 00:24:34.783 "config": [] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "scheduler", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "framework_set_scheduler", 00:24:34.783 "params": { 00:24:34.783 "name": "static" 00:24:34.783 } 00:24:34.783 } 00:24:34.783 ] 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "subsystem": "nvmf", 00:24:34.783 "config": [ 00:24:34.783 { 00:24:34.783 "method": "nvmf_set_config", 00:24:34.783 "params": { 00:24:34.783 "discovery_filter": "match_any", 00:24:34.783 "admin_cmd_passthru": { 00:24:34.783 "identify_ctrlr": false 00:24:34.783 }, 00:24:34.783 "dhchap_digests": [ 00:24:34.783 "sha256", 00:24:34.783 "sha384", 00:24:34.783 "sha512" 00:24:34.783 ], 00:24:34.783 "dhchap_dhgroups": [ 00:24:34.783 "null", 00:24:34.783 "ffdhe2048", 00:24:34.783 "ffdhe3072", 00:24:34.783 "ffdhe4096", 00:24:34.783 "ffdhe6144", 00:24:34.783 "ffdhe8192" 00:24:34.783 ] 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_set_max_subsystems", 00:24:34.783 "params": { 00:24:34.783 "max_subsystems": 1024 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_set_crdt", 00:24:34.783 "params": { 00:24:34.783 "crdt1": 0, 00:24:34.783 "crdt2": 0, 00:24:34.783 "crdt3": 0 00:24:34.783 } 
00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_create_transport", 00:24:34.783 "params": { 00:24:34.783 "trtype": "TCP", 00:24:34.783 "max_queue_depth": 128, 00:24:34.783 "max_io_qpairs_per_ctrlr": 127, 00:24:34.783 "in_capsule_data_size": 4096, 00:24:34.783 "max_io_size": 131072, 00:24:34.783 "io_unit_size": 131072, 00:24:34.783 "max_aq_depth": 128, 00:24:34.783 "num_shared_buffers": 511, 00:24:34.783 "buf_cache_size": 4294967295, 00:24:34.783 "dif_insert_or_strip": false, 00:24:34.783 "zcopy": false, 00:24:34.783 "c2h_success": false, 00:24:34.783 "sock_priority": 0, 00:24:34.783 "abort_timeout_sec": 1, 00:24:34.783 "ack_timeout": 0, 00:24:34.783 "data_wr_pool_size": 0 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_create_subsystem", 00:24:34.783 "params": { 00:24:34.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.783 "allow_any_host": false, 00:24:34.783 "serial_number": "00000000000000000000", 00:24:34.783 "model_number": "SPDK bdev Controller", 00:24:34.783 "max_namespaces": 32, 00:24:34.783 "min_cntlid": 1, 00:24:34.783 "max_cntlid": 65519, 00:24:34.783 "ana_reporting": false 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_subsystem_add_host", 00:24:34.783 "params": { 00:24:34.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.783 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.783 "psk": "key0" 00:24:34.783 } 00:24:34.783 }, 00:24:34.783 { 00:24:34.783 "method": "nvmf_subsystem_add_ns", 00:24:34.783 "params": { 00:24:34.783 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.783 "namespace": { 00:24:34.783 "nsid": 1, 00:24:34.783 "bdev_name": "malloc0", 00:24:34.784 "nguid": "E43121E350904B10B3D4C9E92DAA50E2", 00:24:34.784 "uuid": "e43121e3-5090-4b10-b3d4-c9e92daa50e2", 00:24:34.784 "no_auto_visible": false 00:24:34.784 } 00:24:34.784 } 00:24:34.784 }, 00:24:34.784 { 00:24:34.784 "method": "nvmf_subsystem_add_listener", 00:24:34.784 "params": { 00:24:34.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:34.784 "listen_address": { 00:24:34.784 "trtype": "TCP", 00:24:34.784 "adrfam": "IPv4", 00:24:34.784 "traddr": "10.0.0.2", 00:24:34.784 "trsvcid": "4420" 00:24:34.784 }, 00:24:34.784 "secure_channel": false, 00:24:34.784 "sock_impl": "ssl" 00:24:34.784 } 00:24:34.784 } 00:24:34.784 ] 00:24:34.784 } 00:24:34.784 ] 00:24:34.784 }' 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=279964 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 279964 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279964 ']' 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.784 11:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.784 [2024-11-17 11:18:59.235849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:34.784 [2024-11-17 11:18:59.235982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.784 [2024-11-17 11:18:59.308107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.784 [2024-11-17 11:18:59.355740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.784 [2024-11-17 11:18:59.355813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.784 [2024-11-17 11:18:59.355841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.784 [2024-11-17 11:18:59.355852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.784 [2024-11-17 11:18:59.355862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
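The target configuration streamed to nvmf_tgt via /dev/fd/62 above is an ordinary SPDK JSON-RPC config document. As a sketch only (values copied from the trace, trimmed to the essential parameters; this is not the full blob the test generated), the same method list can be assembled and sanity-checked like this:

```python
import json

# Illustrative reconstruction of part of the nvmf config shown in the trace.
# All values below are copied from the log output; nothing is invented.
nvmf_config = {
    "subsystems": [
        {
            "subsystem": "nvmf",
            "config": [
                {
                    "method": "nvmf_create_transport",
                    "params": {"trtype": "TCP", "max_queue_depth": 128},
                },
                {
                    "method": "nvmf_create_subsystem",
                    "params": {
                        "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "allow_any_host": False,
                        "model_number": "SPDK bdev Controller",
                    },
                },
                {
                    "method": "nvmf_subsystem_add_host",
                    "params": {
                        "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1",
                        # References the key loaded earlier via keyring_file_add_key
                        "psk": "key0",
                    },
                },
                {
                    "method": "nvmf_subsystem_add_listener",
                    "params": {
                        "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "listen_address": {
                            "trtype": "TCP",
                            "adrfam": "IPv4",
                            "traddr": "10.0.0.2",
                            "trsvcid": "4420",
                        },
                        "secure_channel": False,
                        # TLS path; the target logs this support as experimental
                        "sock_impl": "ssl",
                    },
                },
            ],
        }
    ]
}

# Round-trip through JSON to confirm the document is well formed.
blob = json.dumps(nvmf_config)
methods = [c["method"] for c in json.loads(blob)["subsystems"][0]["config"]]
print(methods)
```

The method order matters: the transport and subsystem must exist before hosts, namespaces, and listeners are attached to them, which is the order the trace shows.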
00:24:34.784 [2024-11-17 11:18:59.356517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.042 [2024-11-17 11:18:59.591470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.042 [2024-11-17 11:18:59.623536] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.042 [2024-11-17 11:18:59.623844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.606 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.606 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:35.606 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.606 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.606 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=280115 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 280115 /var/tmp/bdevperf.sock 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280115 ']' 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.864 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:35.864 "subsystems": [ 00:24:35.864 { 00:24:35.864 "subsystem": "keyring", 00:24:35.864 "config": [ 00:24:35.864 { 00:24:35.864 "method": "keyring_file_add_key", 00:24:35.864 "params": { 00:24:35.864 "name": "key0", 00:24:35.864 "path": "/tmp/tmp.3uUOT2XQVf" 00:24:35.864 } 00:24:35.864 } 00:24:35.864 ] 00:24:35.864 }, 00:24:35.864 { 00:24:35.864 "subsystem": "iobuf", 00:24:35.864 "config": [ 00:24:35.864 { 00:24:35.864 "method": "iobuf_set_options", 00:24:35.864 "params": { 00:24:35.864 "small_pool_count": 8192, 00:24:35.864 "large_pool_count": 1024, 00:24:35.864 "small_bufsize": 8192, 00:24:35.865 "large_bufsize": 135168, 00:24:35.865 "enable_numa": false 00:24:35.865 } 00:24:35.865 } 00:24:35.865 ] 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "subsystem": "sock", 00:24:35.865 "config": [ 00:24:35.865 { 00:24:35.865 "method": "sock_set_default_impl", 00:24:35.865 "params": { 00:24:35.865 "impl_name": "posix" 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "sock_impl_set_options", 00:24:35.865 "params": { 00:24:35.865 "impl_name": "ssl", 00:24:35.865 "recv_buf_size": 4096, 00:24:35.865 "send_buf_size": 4096, 00:24:35.865 "enable_recv_pipe": true, 00:24:35.865 "enable_quickack": false, 00:24:35.865 "enable_placement_id": 0, 00:24:35.865 "enable_zerocopy_send_server": true, 00:24:35.865 "enable_zerocopy_send_client": false, 00:24:35.865 "zerocopy_threshold": 0, 00:24:35.865 "tls_version": 0, 00:24:35.865 "enable_ktls": false 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "sock_impl_set_options", 00:24:35.865 "params": { 
00:24:35.865 "impl_name": "posix", 00:24:35.865 "recv_buf_size": 2097152, 00:24:35.865 "send_buf_size": 2097152, 00:24:35.865 "enable_recv_pipe": true, 00:24:35.865 "enable_quickack": false, 00:24:35.865 "enable_placement_id": 0, 00:24:35.865 "enable_zerocopy_send_server": true, 00:24:35.865 "enable_zerocopy_send_client": false, 00:24:35.865 "zerocopy_threshold": 0, 00:24:35.865 "tls_version": 0, 00:24:35.865 "enable_ktls": false 00:24:35.865 } 00:24:35.865 } 00:24:35.865 ] 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "subsystem": "vmd", 00:24:35.865 "config": [] 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "subsystem": "accel", 00:24:35.865 "config": [ 00:24:35.865 { 00:24:35.865 "method": "accel_set_options", 00:24:35.865 "params": { 00:24:35.865 "small_cache_size": 128, 00:24:35.865 "large_cache_size": 16, 00:24:35.865 "task_count": 2048, 00:24:35.865 "sequence_count": 2048, 00:24:35.865 "buf_count": 2048 00:24:35.865 } 00:24:35.865 } 00:24:35.865 ] 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "subsystem": "bdev", 00:24:35.865 "config": [ 00:24:35.865 { 00:24:35.865 "method": "bdev_set_options", 00:24:35.865 "params": { 00:24:35.865 "bdev_io_pool_size": 65535, 00:24:35.865 "bdev_io_cache_size": 256, 00:24:35.865 "bdev_auto_examine": true, 00:24:35.865 "iobuf_small_cache_size": 128, 00:24:35.865 "iobuf_large_cache_size": 16 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_raid_set_options", 00:24:35.865 "params": { 00:24:35.865 "process_window_size_kb": 1024, 00:24:35.865 "process_max_bandwidth_mb_sec": 0 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_iscsi_set_options", 00:24:35.865 "params": { 00:24:35.865 "timeout_sec": 30 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_nvme_set_options", 00:24:35.865 "params": { 00:24:35.865 "action_on_timeout": "none", 00:24:35.865 "timeout_us": 0, 00:24:35.865 "timeout_admin_us": 0, 00:24:35.865 "keep_alive_timeout_ms": 10000, 00:24:35.865 
"arbitration_burst": 0, 00:24:35.865 "low_priority_weight": 0, 00:24:35.865 "medium_priority_weight": 0, 00:24:35.865 "high_priority_weight": 0, 00:24:35.865 "nvme_adminq_poll_period_us": 10000, 00:24:35.865 "nvme_ioq_poll_period_us": 0, 00:24:35.865 "io_queue_requests": 512, 00:24:35.865 "delay_cmd_submit": true, 00:24:35.865 "transport_retry_count": 4, 00:24:35.865 "bdev_retry_count": 3, 00:24:35.865 "transport_ack_timeout": 0, 00:24:35.865 "ctrlr_loss_timeout_sec": 0, 00:24:35.865 "reconnect_delay_sec": 0, 00:24:35.865 "fast_io_fail_timeout_sec": 0, 00:24:35.865 "disable_auto_failback": false, 00:24:35.865 "generate_uuids": false, 00:24:35.865 "transport_tos": 0, 00:24:35.865 "nvme_error_stat": false, 00:24:35.865 "rdma_srq_size": 0, 00:24:35.865 "io_path_stat": false, 00:24:35.865 "allow_accel_sequence": false, 00:24:35.865 "rdma_max_cq_size": 0, 00:24:35.865 "rdma_cm_event_timeout_ms": 0, 00:24:35.865 "dhchap_digests": [ 00:24:35.865 "sha256", 00:24:35.865 "sha384", 00:24:35.865 "sha512" 00:24:35.865 ], 00:24:35.865 "dhchap_dhgroups": [ 00:24:35.865 "null", 00:24:35.865 "ffdhe2048", 00:24:35.865 "ffdhe3072", 00:24:35.865 "ffdhe4096", 00:24:35.865 "ffdhe6144", 00:24:35.865 "ffdhe8192" 00:24:35.865 ] 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_nvme_attach_controller", 00:24:35.865 "params": { 00:24:35.865 "name": "nvme0", 00:24:35.865 "trtype": "TCP", 00:24:35.865 "adrfam": "IPv4", 00:24:35.865 "traddr": "10.0.0.2", 00:24:35.865 "trsvcid": "4420", 00:24:35.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.865 "prchk_reftag": false, 00:24:35.865 "prchk_guard": false, 00:24:35.865 "ctrlr_loss_timeout_sec": 0, 00:24:35.865 "reconnect_delay_sec": 0, 00:24:35.865 "fast_io_fail_timeout_sec": 0, 00:24:35.865 "psk": "key0", 00:24:35.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.865 "hdgst": false, 00:24:35.865 "ddgst": false, 00:24:35.865 "multipath": "multipath" 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 
"method": "bdev_nvme_set_hotplug", 00:24:35.865 "params": { 00:24:35.865 "period_us": 100000, 00:24:35.865 "enable": false 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_enable_histogram", 00:24:35.865 "params": { 00:24:35.865 "name": "nvme0n1", 00:24:35.865 "enable": true 00:24:35.865 } 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "method": "bdev_wait_for_examine" 00:24:35.865 } 00:24:35.865 ] 00:24:35.865 }, 00:24:35.865 { 00:24:35.865 "subsystem": "nbd", 00:24:35.865 "config": [] 00:24:35.865 } 00:24:35.865 ] 00:24:35.865 }' 00:24:35.865 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.865 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.865 [2024-11-17 11:19:00.329750] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:24:35.865 [2024-11-17 11:19:00.329834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280115 ] 00:24:35.865 [2024-11-17 11:19:00.397545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.865 [2024-11-17 11:19:00.445306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.123 [2024-11-17 11:19:00.624859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.123 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.123 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:36.123 11:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.123 11:19:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:36.381 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.381 11:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.639 Running I/O for 1 seconds... 00:24:37.572 3120.00 IOPS, 12.19 MiB/s 00:24:37.572 Latency(us) 00:24:37.572 [2024-11-17T10:19:02.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.572 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:37.572 Verification LBA range: start 0x0 length 0x2000 00:24:37.572 nvme0n1 : 1.02 3183.74 12.44 0.00 0.00 39854.20 6019.60 43496.49 00:24:37.572 [2024-11-17T10:19:02.230Z] =================================================================================================================== 00:24:37.572 [2024-11-17T10:19:02.230Z] Total : 3183.74 12.44 0.00 0.00 39854.20 6019.60 43496.49 00:24:37.572 { 00:24:37.572 "results": [ 00:24:37.572 { 00:24:37.572 "job": "nvme0n1", 00:24:37.572 "core_mask": "0x2", 00:24:37.572 "workload": "verify", 00:24:37.572 "status": "finished", 00:24:37.572 "verify_range": { 00:24:37.572 "start": 0, 00:24:37.572 "length": 8192 00:24:37.572 }, 00:24:37.572 "queue_depth": 128, 00:24:37.572 "io_size": 4096, 00:24:37.572 "runtime": 1.020183, 00:24:37.572 "iops": 3183.742524625484, 00:24:37.572 "mibps": 12.436494236818296, 00:24:37.572 "io_failed": 0, 00:24:37.572 "io_timeout": 0, 00:24:37.572 "avg_latency_us": 39854.19956942164, 00:24:37.572 "min_latency_us": 6019.602962962963, 00:24:37.572 "max_latency_us": 43496.485925925925 00:24:37.572 } 00:24:37.572 ], 00:24:37.572 "core_count": 1 00:24:37.572 } 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:37.572 11:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:37.572 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:37.572 nvmf_trace.0 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 280115 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280115 ']' 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280115 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 280115 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280115' 00:24:37.831 killing process with pid 280115 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280115 00:24:37.831 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.831 00:24:37.831 Latency(us) 00:24:37.831 [2024-11-17T10:19:02.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.831 [2024-11-17T10:19:02.489Z] =================================================================================================================== 00:24:37.831 [2024-11-17T10:19:02.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280115 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.831 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.831 rmmod nvme_tcp 00:24:38.090 rmmod nvme_fabrics 00:24:38.090 rmmod nvme_keyring 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 279964 ']' 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 279964 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279964 ']' 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279964 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279964 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279964' 00:24:38.090 killing process with pid 279964 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279964 00:24:38.090 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279964 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.348 11:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bSwUKckQhQ /tmp/tmp.GCHPqIA7jc /tmp/tmp.3uUOT2XQVf 00:24:40.251 00:24:40.251 real 1m21.938s 00:24:40.251 user 2m18.526s 00:24:40.251 sys 0m24.100s 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.251 ************************************ 00:24:40.251 END TEST nvmf_tls 00:24:40.251 ************************************ 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.251 ************************************ 00:24:40.251 START TEST nvmf_fips 00:24:40.251 ************************************ 00:24:40.251 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.511 * Looking for test storage... 00:24:40.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:40.511 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:40.511 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:40.511 11:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.511 
11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:40.511 11:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.511 --rc genhtml_branch_coverage=1 00:24:40.511 --rc genhtml_function_coverage=1 00:24:40.511 --rc genhtml_legend=1 00:24:40.511 --rc geninfo_all_blocks=1 00:24:40.511 --rc geninfo_unexecuted_blocks=1 00:24:40.511 00:24:40.511 ' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.511 --rc genhtml_branch_coverage=1 00:24:40.511 --rc genhtml_function_coverage=1 00:24:40.511 --rc genhtml_legend=1 00:24:40.511 --rc geninfo_all_blocks=1 00:24:40.511 --rc geninfo_unexecuted_blocks=1 00:24:40.511 00:24:40.511 ' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.511 --rc genhtml_branch_coverage=1 00:24:40.511 --rc genhtml_function_coverage=1 00:24:40.511 --rc genhtml_legend=1 00:24:40.511 --rc geninfo_all_blocks=1 00:24:40.511 --rc geninfo_unexecuted_blocks=1 00:24:40.511 00:24:40.511 ' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:40.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.511 --rc genhtml_branch_coverage=1 00:24:40.511 --rc genhtml_function_coverage=1 00:24:40.511 --rc genhtml_legend=1 00:24:40.511 --rc geninfo_all_blocks=1 00:24:40.511 --rc geninfo_unexecuted_blocks=1 00:24:40.511 00:24:40.511 ' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
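The `cmp_versions` walk traced above (`lt 1.15 2` for lcov, and later `ge 3.1.1 3.0.0` for the OpenSSL FIPS gate) splits each version string on `.` and `-` via `IFS=.-:` and compares the numeric fields left to right, treating missing fields as zero. A minimal sketch of the same logic (function names are mine, not from scripts/common.sh):

```python
import re

def split_fields(ver: str):
    """Split a version string on '.' and '-', like the shell's IFS=.-: read -ra."""
    return [int(f) for f in re.split(r"[.\-]", ver) if f.isdigit()]

def version_lt(a: str, b: str) -> bool:
    """True when a < b, comparing numeric fields left to right; missing fields count as 0."""
    fa, fb = split_fields(a), split_fields(b)
    width = max(len(fa), len(fb))
    fa += [0] * (width - len(fa))
    fb += [0] * (width - len(fb))
    for x, y in zip(fa, fb):
        if x != y:
            return x < y
    return False  # equal versions are not less-than

def version_ge(a: str, b: str) -> bool:
    return not version_lt(a, b)

# The two comparisons performed in the trace:
print(version_lt("1.15", "2"))       # lcov check → True
print(version_ge("3.1.1", "3.0.0"))  # openssl >= 3.0.0 gate for the fips test → True
```

Field-wise comparison is why `1.15 < 2` holds here even though a plain string compare would say otherwise: `1` loses to `2` in the first field and the walk stops.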
00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.511 11:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.511 11:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.511 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:40.512 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:40.770 Error setting digest 00:24:40.770 40726956DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:40.770 40726956DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.770 11:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.770 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.771 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.771 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.771 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.771 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.771 11:19:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.302 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
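The `nvmf/common.sh@320-344` records above build per-family arrays (`e810`, `x722`, `mlx`) keyed by PCI vendor:device IDs. A reduced sketch of that grouping as a plain lookup, using the IDs visible in the trace (the `nic_family` function name is ours, and the Mellanox branch is simplified to a vendor wildcard rather than the explicit ID list above):

```shell
# Sketch of the device-ID grouping above, reduced to a lookup:
# map "vendor:device" to the NIC family the test framework sorts it into.
nic_family() {
  case $1 in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
    0x8086:0x37d2)               echo x722 ;;
    0x15b3:*)                    echo mlx ;;     # simplification of the 0x1013..0xa2dc list above
    *)                           echo unknown ;;
  esac
}
```

With this table, the two `Found 0000:0a:00.x (0x8086 - 0x159b)` devices reported later in the log land in the `e810` bucket, which is why `[[ e810 == e810 ]]` takes the E810 branch.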
00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
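The `nvmf/common.sh@411-427` records above find the netdev bound to each PCI function by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix. The same two-step glob-and-strip can be exercised against a scratch directory standing in for sysfs (the real sysfs path reflects hardware state we cannot assume here):

```shell
# Sketch of the /sys glob in nvmf/common.sh@411 above, run against a
# throwaway directory tree standing in for sysfs.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/devices/0000:0a:00.0/net/cvl_0_0"

pci_net_devs=("$sysfs/devices/0000:0a:00.0/net/"*)   # glob: full paths
pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only interface names
echo "${pci_net_devs[0]}"

rm -rf "$sysfs"
```

This mirrors why the log then prints `Found net devices under 0000:0a:00.0: cvl_0_0`.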
00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.303 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:24:43.303 00:24:43.303 --- 10.0.0.2 ping statistics --- 00:24:43.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.303 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:24:43.303 00:24:43.303 --- 10.0.0.1 ping statistics --- 00:24:43.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.303 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:43.303 11:19:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=282474 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 282474 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 282474 ']' 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.303 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.304 [2024-11-17 11:19:07.720661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:24:43.304 [2024-11-17 11:19:07.720742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.304 [2024-11-17 11:19:07.790461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.304 [2024-11-17 11:19:07.834233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.304 [2024-11-17 11:19:07.834289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.304 [2024-11-17 11:19:07.834317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.304 [2024-11-17 11:19:07.834329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.304 [2024-11-17 11:19:07.834339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:43.304 [2024-11-17 11:19:07.834911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.304 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.0Qz 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.0Qz 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.0Qz 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.0Qz 00:24:43.562 11:19:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.562 [2024-11-17 11:19:08.215413] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.821 [2024-11-17 11:19:08.231408] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.821 [2024-11-17 11:19:08.231697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.821 malloc0 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=282506 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 282506 /var/tmp/bdevperf.sock 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 282506 ']' 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.821 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.821 [2024-11-17 11:19:08.363753] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
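The key written by `fips.sh@137-140` above uses the NVMe TLS PSK interchange format, which we take to be `NVMeTLSkey-1:<hash>:<base64(PSK || CRC-32)>:`, with hash indicator `01` denoting a 32-byte SHA-256 PSK (so the base64 payload decodes to 32 + 4 = 36 bytes). A structural check under that assumption (`check_psk` is our name, not an SPDK helper, and it validates shape only, not the CRC):

```shell
# Hedged sketch: structural check of an NVMe/TCP TLS PSK interchange string,
# assuming the format "NVMeTLSkey-1:<hash>:<base64(key||CRC32)>:" with
# hash 01 = 32-byte SHA-256 PSK. Shape only; the CRC is not verified here.
check_psk() {
  local psk=$1 b64
  [[ $psk == NVMeTLSkey-1:01:*: ]] || return 1       # version 1, hash 01
  b64=${psk#NVMeTLSkey-1:01:}
  b64=${b64%:}
  # 32-byte key + 4-byte CRC-32 = 36 bytes of decoded payload
  [ "$(printf '%s' "$b64" | base64 -d 2>/dev/null | wc -c)" -eq 36 ]
}
```

The key from the trace passes this check: its 48 base64 characters decode to exactly 36 bytes.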
00:24:43.821 [2024-11-17 11:19:08.363843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid282506 ] 00:24:43.821 [2024-11-17 11:19:08.427759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.821 [2024-11-17 11:19:08.472340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.080 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.080 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:44.080 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.0Qz 00:24:44.338 11:19:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.596 [2024-11-17 11:19:09.114270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.596 TLSTESTn1 00:24:44.596 11:19:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.854 Running I/O for 10 seconds... 
00:24:46.721 3202.00 IOPS, 12.51 MiB/s [2024-11-17T10:19:12.752Z] 3269.50 IOPS, 12.77 MiB/s [2024-11-17T10:19:13.684Z] 3282.67 IOPS, 12.82 MiB/s [2024-11-17T10:19:14.615Z] 3328.75 IOPS, 13.00 MiB/s [2024-11-17T10:19:15.547Z] 3329.80 IOPS, 13.01 MiB/s [2024-11-17T10:19:16.480Z] 3344.17 IOPS, 13.06 MiB/s [2024-11-17T10:19:17.414Z] 3351.71 IOPS, 13.09 MiB/s [2024-11-17T10:19:18.347Z] 3355.38 IOPS, 13.11 MiB/s [2024-11-17T10:19:19.721Z] 3356.56 IOPS, 13.11 MiB/s [2024-11-17T10:19:19.721Z] 3364.00 IOPS, 13.14 MiB/s 00:24:55.063 Latency(us) 00:24:55.063 [2024-11-17T10:19:19.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.063 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:55.063 Verification LBA range: start 0x0 length 0x2000 00:24:55.063 TLSTESTn1 : 10.02 3370.85 13.17 0.00 0.00 37915.80 6650.69 37476.88 00:24:55.063 [2024-11-17T10:19:19.721Z] =================================================================================================================== 00:24:55.063 [2024-11-17T10:19:19.721Z] Total : 3370.85 13.17 0.00 0.00 37915.80 6650.69 37476.88 00:24:55.063 { 00:24:55.063 "results": [ 00:24:55.063 { 00:24:55.063 "job": "TLSTESTn1", 00:24:55.063 "core_mask": "0x4", 00:24:55.063 "workload": "verify", 00:24:55.063 "status": "finished", 00:24:55.063 "verify_range": { 00:24:55.063 "start": 0, 00:24:55.063 "length": 8192 00:24:55.063 }, 00:24:55.063 "queue_depth": 128, 00:24:55.063 "io_size": 4096, 00:24:55.063 "runtime": 10.017364, 00:24:55.063 "iops": 3370.8468615096745, 00:24:55.063 "mibps": 13.167370552772166, 00:24:55.063 "io_failed": 0, 00:24:55.063 "io_timeout": 0, 00:24:55.063 "avg_latency_us": 37915.80449849678, 00:24:55.063 "min_latency_us": 6650.69037037037, 00:24:55.063 "max_latency_us": 37476.88296296296 00:24:55.063 } 00:24:55.063 ], 00:24:55.063 "core_count": 1 00:24:55.063 } 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:55.063 
11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:55.063 nvmf_trace.0 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 282506 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 282506 ']' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 282506 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282506 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282506' 00:24:55.063 killing process with pid 282506 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 282506 00:24:55.063 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.063 00:24:55.063 Latency(us) 00:24:55.063 [2024-11-17T10:19:19.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.063 [2024-11-17T10:19:19.721Z] =================================================================================================================== 00:24:55.063 [2024-11-17T10:19:19.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 282506 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.063 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.063 rmmod nvme_tcp 00:24:55.063 rmmod nvme_fabrics 00:24:55.063 rmmod nvme_keyring 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.324 11:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 282474 ']' 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 282474 ']' 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282474' 00:24:55.324 killing process with pid 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 282474 00:24:55.324 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.582 11:19:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.0Qz 00:24:57.485 00:24:57.485 real 0m17.141s 00:24:57.485 user 0m22.476s 00:24:57.485 sys 0m5.461s 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.485 ************************************ 00:24:57.485 END TEST nvmf_fips 00:24:57.485 ************************************ 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.485 ************************************ 00:24:57.485 START TEST nvmf_control_msg_list 00:24:57.485 ************************************ 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.485 * Looking for test storage... 00:24:57.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.485 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.745 --rc genhtml_branch_coverage=1 00:24:57.745 --rc genhtml_function_coverage=1 00:24:57.745 --rc genhtml_legend=1 00:24:57.745 --rc geninfo_all_blocks=1 00:24:57.745 --rc geninfo_unexecuted_blocks=1 00:24:57.745 00:24:57.745 ' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.745 --rc genhtml_branch_coverage=1 00:24:57.745 --rc genhtml_function_coverage=1 00:24:57.745 --rc genhtml_legend=1 00:24:57.745 --rc geninfo_all_blocks=1 00:24:57.745 --rc geninfo_unexecuted_blocks=1 00:24:57.745 00:24:57.745 ' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.745 --rc genhtml_branch_coverage=1 00:24:57.745 --rc genhtml_function_coverage=1 00:24:57.745 --rc genhtml_legend=1 00:24:57.745 --rc geninfo_all_blocks=1 00:24:57.745 --rc geninfo_unexecuted_blocks=1 00:24:57.745 00:24:57.745 ' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.745 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.745 --rc genhtml_branch_coverage=1 00:24:57.745 --rc genhtml_function_coverage=1 00:24:57.745 --rc genhtml_legend=1 00:24:57.745 --rc geninfo_all_blocks=1 00:24:57.745 --rc geninfo_unexecuted_blocks=1 00:24:57.745 00:24:57.745 ' 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.745 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.746 11:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.746 11:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.746 11:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.746 11:19:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.281 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:00.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:00.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.281 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:00.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.281 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:00.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.281 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.282 11:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:25:00.282 00:25:00.282 --- 10.0.0.2 ping statistics --- 00:25:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.282 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:25:00.282 00:25:00.282 --- 10.0.0.1 ping statistics --- 00:25:00.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.282 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=285764 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 285764 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 285764 ']' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 [2024-11-17 11:19:24.572638] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:00.282 [2024-11-17 11:19:24.572736] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.282 [2024-11-17 11:19:24.645501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.282 [2024-11-17 11:19:24.690865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.282 [2024-11-17 11:19:24.690925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.282 [2024-11-17 11:19:24.690953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.282 [2024-11-17 11:19:24.690965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.282 [2024-11-17 11:19:24.690975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
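The namespace plumbing traced above (the `nvmf_tcp_init` steps from nvmf/common.sh) can be condensed into a dry-run sketch. The interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are taken from the log; the `run()` wrapper and `DRY_RUN` switch are illustrative additions, not part of SPDK:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup sequence traced in this log.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}   # set to 0 to actually execute (needs root and the NICs)
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # root ns -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1        # namespace -> root ns
```

Moving the target NIC into its own network namespace lets one host drive real NIC-to-NIC TCP traffic between target and initiator, which is why both pings in the log succeed before nvmf_tgt is launched under `ip netns exec cvl_0_0_ns_spdk`.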
00:25:00.282 [2024-11-17 11:19:24.691604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 [2024-11-17 11:19:24.836044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 Malloc0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.282 [2024-11-17 11:19:24.875728] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=285906 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=285907 00:25:00.282 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.283 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=285908 00:25:00.283 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 285906 00:25:00.283 11:19:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.283 [2024-11-17 11:19:24.934226] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
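The `rpc_cmd` calls above configure the target before the three concurrent spdk_nvme_perf instances attach. The same sequence can be sketched as a dry-run script; the `rpc()` wrapper and the `scripts/rpc.py` invocation are assumptions for illustration (the log drives these through the test harness's `rpc_cmd` helper instead):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target configuration traced above.
set -euo pipefail

DRY_RUN=${DRY_RUN:-1}   # set to 0 to send real RPCs to a running nvmf_tgt
rpc() { if [ "$DRY_RUN" = 1 ]; then echo "rpc.py $*"; else ./scripts/rpc.py "$@"; fi; }

SUBNQN=nqn.2024-07.io.spdk:cnode0

rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc nvmf_create_subsystem "$SUBNQN" -a        # -a: allow any host to connect
rpc bdev_malloc_create -b Malloc0 32 512      # 32 MiB ramdisk bdev, 512 B blocks
rpc nvmf_subsystem_add_ns "$SUBNQN" Malloc0   # expose Malloc0 under the subsystem
rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

`--control-msg-num 1` appears to be the point of this test: with a single control-message buffer, the three perf initiators launched afterwards must contend for it, exercising the control-message free list that the test is named for.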
00:25:00.540 [2024-11-17 11:19:24.944266] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.541 [2024-11-17 11:19:24.944490] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.474 Initializing NVMe Controllers 00:25:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:01.474 Initialization complete. Launching workers. 00:25:01.474 ======================================================== 00:25:01.474 Latency(us) 00:25:01.474 Device Information : IOPS MiB/s Average min max 00:25:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40908.46 40851.82 41099.37 00:25:01.474 ======================================================== 00:25:01.474 Total : 25.00 0.10 40908.46 40851.82 41099.37 00:25:01.474 00:25:01.474 Initializing NVMe Controllers 00:25:01.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:01.474 Initialization complete. Launching workers. 
00:25:01.474 ======================================================== 00:25:01.474 Latency(us) 00:25:01.474 Device Information : IOPS MiB/s Average min max 00:25:01.474 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40873.19 40327.35 40934.46 00:25:01.474 ======================================================== 00:25:01.474 Total : 25.00 0.10 40873.19 40327.35 40934.46 00:25:01.474 00:25:01.474 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 285907 00:25:01.733 Initializing NVMe Controllers 00:25:01.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:01.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:01.733 Initialization complete. Launching workers. 00:25:01.733 ======================================================== 00:25:01.733 Latency(us) 00:25:01.733 Device Information : IOPS MiB/s Average min max 00:25:01.733 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40896.48 40764.19 40951.18 00:25:01.733 ======================================================== 00:25:01.733 Total : 25.00 0.10 40896.48 40764.19 40951.18 00:25:01.733 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 285908 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:01.733 11:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:01.733 rmmod nvme_tcp 00:25:01.733 rmmod nvme_fabrics 00:25:01.733 rmmod nvme_keyring 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 285764 ']' 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 285764 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 285764 ']' 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 285764 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285764 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 285764' 00:25:01.733 killing process with pid 285764 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 285764 00:25:01.733 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 285764 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.993 11:19:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:04.530 00:25:04.530 real 0m6.513s 00:25:04.530 user 0m6.096s 00:25:04.530 sys 
0m2.484s 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:04.530 ************************************ 00:25:04.530 END TEST nvmf_control_msg_list 00:25:04.530 ************************************ 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:04.530 ************************************ 00:25:04.530 START TEST nvmf_wait_for_buf 00:25:04.530 ************************************ 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:04.530 * Looking for test storage... 
00:25:04.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:04.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.530 --rc genhtml_branch_coverage=1 00:25:04.530 --rc genhtml_function_coverage=1 00:25:04.530 --rc genhtml_legend=1 00:25:04.530 --rc geninfo_all_blocks=1 00:25:04.530 --rc geninfo_unexecuted_blocks=1 00:25:04.530 00:25:04.530 ' 00:25:04.530 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:04.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.530 --rc genhtml_branch_coverage=1 00:25:04.530 --rc genhtml_function_coverage=1 00:25:04.530 --rc genhtml_legend=1 00:25:04.530 --rc geninfo_all_blocks=1 00:25:04.530 --rc geninfo_unexecuted_blocks=1 00:25:04.531 00:25:04.531 ' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:04.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.531 --rc genhtml_branch_coverage=1 00:25:04.531 --rc genhtml_function_coverage=1 00:25:04.531 --rc genhtml_legend=1 00:25:04.531 --rc geninfo_all_blocks=1 00:25:04.531 --rc geninfo_unexecuted_blocks=1 00:25:04.531 00:25:04.531 ' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:04.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:04.531 --rc genhtml_branch_coverage=1 00:25:04.531 --rc genhtml_function_coverage=1 00:25:04.531 --rc genhtml_legend=1 00:25:04.531 --rc geninfo_all_blocks=1 00:25:04.531 --rc geninfo_unexecuted_blocks=1 00:25:04.531 00:25:04.531 ' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:04.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:04.531 11:19:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:06.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:06.432 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:06.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:06.433 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:06.433 11:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:06.433 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:06.433 11:19:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.433 11:19:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.433 11:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:06.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:06.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:25:06.433 00:25:06.433 --- 10.0.0.2 ping statistics --- 00:25:06.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.433 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:06.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:25:06.433 00:25:06.433 --- 10.0.0.1 ping statistics --- 00:25:06.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.433 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:06.433 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=287987 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 287987 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 287987 ']' 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.693 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.693 [2024-11-17 11:19:31.135995] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:06.693 [2024-11-17 11:19:31.136067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.693 [2024-11-17 11:19:31.205935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.693 [2024-11-17 11:19:31.250273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.693 [2024-11-17 11:19:31.250341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:06.693 [2024-11-17 11:19:31.250368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.693 [2024-11-17 11:19:31.250379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.693 [2024-11-17 11:19:31.250388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.693 [2024-11-17 11:19:31.251028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 
11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 Malloc0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.952 [2024-11-17 11:19:31.493617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.952 [2024-11-17 11:19:31.517874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:06.952 11:19:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.953 [2024-11-17 11:19:31.598666] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:08.327 Initializing NVMe Controllers 00:25:08.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:08.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:08.327 Initialization complete. Launching workers. 00:25:08.327 ======================================================== 00:25:08.327 Latency(us) 00:25:08.327 Device Information : IOPS MiB/s Average min max 00:25:08.327 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 109.00 13.62 38180.61 7996.91 71839.07 00:25:08.327 ======================================================== 00:25:08.327 Total : 109.00 13.62 38180.61 7996.91 71839.07 00:25:08.327 00:25:08.585 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:08.585 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:08.585 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.585 11:19:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.585 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1718 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1718 -eq 0 ]] 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.585 rmmod nvme_tcp 00:25:08.585 rmmod nvme_fabrics 00:25:08.585 rmmod nvme_keyring 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 287987 ']' 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 287987 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 287987 ']' 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 287987 
00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287987 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287987' 00:25:08.585 killing process with pid 287987 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 287987 00:25:08.585 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 287987 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.846 11:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.846 11:19:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.756 00:25:10.756 real 0m6.697s 00:25:10.756 user 0m3.121s 00:25:10.756 sys 0m2.035s 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.756 ************************************ 00:25:10.756 END TEST nvmf_wait_for_buf 00:25:10.756 ************************************ 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.756 ************************************ 00:25:10.756 START TEST nvmf_fuzz 00:25:10.756 ************************************ 00:25:10.756 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:11.016 * Looking for test storage... 00:25:11.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:11.016 11:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:11.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.016 --rc genhtml_branch_coverage=1 00:25:11.016 --rc genhtml_function_coverage=1 
00:25:11.016 --rc genhtml_legend=1 00:25:11.016 --rc geninfo_all_blocks=1 00:25:11.016 --rc geninfo_unexecuted_blocks=1 00:25:11.016 00:25:11.016 ' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.016 --rc genhtml_branch_coverage=1 00:25:11.016 --rc genhtml_function_coverage=1 00:25:11.016 --rc genhtml_legend=1 00:25:11.016 --rc geninfo_all_blocks=1 00:25:11.016 --rc geninfo_unexecuted_blocks=1 00:25:11.016 00:25:11.016 ' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.016 --rc genhtml_branch_coverage=1 00:25:11.016 --rc genhtml_function_coverage=1 00:25:11.016 --rc genhtml_legend=1 00:25:11.016 --rc geninfo_all_blocks=1 00:25:11.016 --rc geninfo_unexecuted_blocks=1 00:25:11.016 00:25:11.016 ' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.016 --rc genhtml_branch_coverage=1 00:25:11.016 --rc genhtml_function_coverage=1 00:25:11.016 --rc genhtml_legend=1 00:25:11.016 --rc geninfo_all_blocks=1 00:25:11.016 --rc geninfo_unexecuted_blocks=1 00:25:11.016 00:25:11.016 ' 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.016 
11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.016 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.017 11:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.558 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.559 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.559 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.559 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.560 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.560 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.560 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.561 11:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.561 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:25:13.562 00:25:13.562 --- 10.0.0.2 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:25:13.562 00:25:13.562 --- 10.0.0.1 ping statistics --- 00:25:13.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.562 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=290196 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:13.562 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 290196 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 290196 ']' 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.563 11:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.563 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 Malloc0 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:13.827 11:19:38 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:45.887 Fuzzing completed. 
Shutting down the fuzz application 00:25:45.887 00:25:45.887 Dumping successful admin opcodes: 00:25:45.887 8, 9, 10, 24, 00:25:45.887 Dumping successful io opcodes: 00:25:45.887 0, 9, 00:25:45.888 NS: 0x2000008eff00 I/O qp, Total commands completed: 509256, total successful commands: 2937, random_seed: 3567248448 00:25:45.888 NS: 0x2000008eff00 admin qp, Total commands completed: 61744, total successful commands: 488, random_seed: 4096521920 00:25:45.888 11:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:45.888 Fuzzing completed. Shutting down the fuzz application 00:25:45.888 00:25:45.888 Dumping successful admin opcodes: 00:25:45.888 24, 00:25:45.888 Dumping successful io opcodes: 00:25:45.888 00:25:45.888 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2541144546 00:25:45.888 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2541255876 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:45.888 11:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:45.888 rmmod nvme_tcp 00:25:45.888 rmmod nvme_fabrics 00:25:45.888 rmmod nvme_keyring 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 290196 ']' 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 290196 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 290196 ']' 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 290196 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290196 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290196' 00:25:45.888 killing process with pid 290196 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 290196 00:25:45.888 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 290196 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.146 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.147 11:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:48.054 00:25:48.054 real 0m37.273s 00:25:48.054 user 0m52.190s 00:25:48.054 sys 0m14.038s 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:48.054 ************************************ 00:25:48.054 END TEST nvmf_fuzz 00:25:48.054 ************************************ 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:48.054 11:20:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:48.314 ************************************ 00:25:48.314 START TEST nvmf_multiconnection 00:25:48.314 ************************************ 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:48.314 * Looking for test storage... 
00:25:48.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:48.314 11:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:48.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.314 --rc genhtml_branch_coverage=1 00:25:48.314 --rc genhtml_function_coverage=1 00:25:48.314 --rc genhtml_legend=1 00:25:48.314 --rc geninfo_all_blocks=1 00:25:48.314 --rc geninfo_unexecuted_blocks=1 00:25:48.314 00:25:48.314 ' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:48.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.314 --rc genhtml_branch_coverage=1 00:25:48.314 --rc genhtml_function_coverage=1 00:25:48.314 --rc genhtml_legend=1 00:25:48.314 --rc geninfo_all_blocks=1 00:25:48.314 --rc geninfo_unexecuted_blocks=1 00:25:48.314 00:25:48.314 ' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:48.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.314 --rc genhtml_branch_coverage=1 00:25:48.314 --rc genhtml_function_coverage=1 00:25:48.314 --rc genhtml_legend=1 00:25:48.314 --rc geninfo_all_blocks=1 00:25:48.314 --rc geninfo_unexecuted_blocks=1 00:25:48.314 00:25:48.314 ' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:48.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.314 --rc genhtml_branch_coverage=1 00:25:48.314 --rc genhtml_function_coverage=1 00:25:48.314 --rc genhtml_legend=1 00:25:48.314 --rc geninfo_all_blocks=1 00:25:48.314 --rc geninfo_unexecuted_blocks=1 00:25:48.314 00:25:48.314 ' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.314 11:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.314 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:48.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:48.315 11:20:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.849 11:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.849 11:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.849 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.849 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.849 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.850 11:20:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.850 11:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:25:50.850 00:25:50.850 --- 10.0.0.2 ping statistics --- 00:25:50.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.850 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:50.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:50.850 00:25:50.850 --- 10.0.0.1 ping statistics --- 00:25:50.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.850 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=295927 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 295927 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 295927 ']' 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.850 [2024-11-17 11:20:15.147084] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:25:50.850 [2024-11-17 11:20:15.147165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.850 [2024-11-17 11:20:15.217128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.850 [2024-11-17 11:20:15.261607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.850 [2024-11-17 11:20:15.261668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.850 [2024-11-17 11:20:15.261697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.850 [2024-11-17 11:20:15.261708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.850 [2024-11-17 11:20:15.261718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:50.850 [2024-11-17 11:20:15.263205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.850 [2024-11-17 11:20:15.263267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.850 [2024-11-17 11:20:15.263334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.850 [2024-11-17 11:20:15.263336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.850 [2024-11-17 11:20:15.409928] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:50.850 11:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.850 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.850 Malloc1 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.851 [2024-11-17 11:20:15.480718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.851 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.109 Malloc2 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.109 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 Malloc3 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 Malloc4 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 
11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 Malloc5 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 Malloc6 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 Malloc7 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.110 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.369 11:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 Malloc8 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.369 11:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:51.369 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.370 Malloc9 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 Malloc10
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 Malloc11
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:51.370 11:20:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:25:51.937 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:25:51.937 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:51.937 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:51.937 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:51.937 11:20:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:54.464 11:20:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:25:54.727 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:25:54.728 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:54.728 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:54.728 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:54.728 11:20:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:57.258 11:20:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:25:57.516 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:25:57.516 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:57.516 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:57.516 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:57.516 11:20:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:59.414 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:26:00.347 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:26:00.347 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:00.347 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:00.347 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:00.347 11:20:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:02.246 11:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:26:03.179 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:26:03.179 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:03.179 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:03.179 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:03.179 11:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:05.079 11:20:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:26:05.646 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:26:05.646 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:05.646 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:05.646 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:05.646 11:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:08.175 11:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:26:08.740 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:26:08.740 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:08.740 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:08.740 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:08.740 11:20:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:10.638 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:26:11.571 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:26:11.571 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:11.571 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:11.571 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:11.571 11:20:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:13.468 11:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:26:14.401 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:26:14.401 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:14.401 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:14.401 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:14.401 11:20:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:16.300 11:20:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:26:17.234 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:26:17.234 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:17.234 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:17.234 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:17.234 11:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:19.133 11:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:26:20.067 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:26:20.067 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:20.067 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:20.067 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:20.067 11:20:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:21.964 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:21.965 11:20:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:26:21.965 [global]
00:26:21.965 thread=1
00:26:21.965 invalidate=1
00:26:21.965 rw=read
00:26:21.965 time_based=1
00:26:21.965 runtime=10
00:26:21.965 ioengine=libaio
00:26:21.965 direct=1
00:26:21.965 bs=262144
00:26:21.965 iodepth=64
00:26:21.965 norandommap=1
00:26:21.965 numjobs=1
00:26:21.965
00:26:21.965 [job0]
00:26:21.965 filename=/dev/nvme0n1
00:26:21.965 [job1]
00:26:21.965 filename=/dev/nvme10n1
00:26:21.965 [job2]
00:26:21.965 filename=/dev/nvme1n1
00:26:21.965 [job3]
00:26:21.965 filename=/dev/nvme2n1
00:26:21.965 [job4]
00:26:21.965 filename=/dev/nvme3n1
00:26:21.965 [job5]
00:26:21.965 filename=/dev/nvme4n1
00:26:21.965 [job6]
00:26:21.965 filename=/dev/nvme5n1
00:26:21.965 [job7]
00:26:21.965 filename=/dev/nvme6n1
00:26:21.965 [job8]
00:26:21.965 filename=/dev/nvme7n1
00:26:21.965 [job9]
00:26:21.965 filename=/dev/nvme8n1
00:26:21.965 [job10]
00:26:21.965 filename=/dev/nvme9n1
00:26:22.223 Could not set queue depth (nvme0n1)
00:26:22.223 Could not set queue depth (nvme10n1)
00:26:22.223 Could not set queue depth (nvme1n1)
00:26:22.223 Could not set queue depth (nvme2n1)
00:26:22.223 Could not set queue depth (nvme3n1)
00:26:22.223 Could not set queue depth (nvme4n1)
00:26:22.223 Could not set queue depth (nvme5n1)
00:26:22.223 Could not set queue depth (nvme6n1)
00:26:22.223 Could not set queue depth (nvme7n1)
00:26:22.223 Could not set queue depth (nvme8n1)
00:26:22.223 Could not set queue depth (nvme9n1)
00:26:22.223 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:22.223 fio-3.35
00:26:22.223 Starting 11 threads
00:26:34.423
00:26:34.423 job0: (groupid=0, jobs=1): err= 0: pid=300041: Sun Nov 17 11:20:57 2024
00:26:34.423 read: IOPS=138, BW=34.7MiB/s (36.3MB/s)(353MiB/10169msec)
00:26:34.423 slat (usec): min=8, max=429671, avg=5941.88, stdev=27024.90
00:26:34.423 clat (msec): min=5, max=1236, avg=455.27, stdev=327.72
00:26:34.423 lat (msec): min=5, max=1236, avg=461.21, stdev=332.85
00:26:34.423 clat percentiles (msec):
00:26:34.423 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 26], 20.00th=[ 78],
00:26:34.423 | 30.00th=[ 157], 40.00th=[ 355], 50.00th=[ 527], 60.00th=[ 584],
00:26:34.423 | 70.00th=[ 642], 80.00th=[ 718], 90.00th=[ 911], 95.00th=[ 995],
00:26:34.423 | 99.00th=[ 1183], 99.50th=[ 1234], 99.90th=[ 1234], 99.95th=[ 1234],
00:26:34.423 | 99.99th=[ 1234]
00:26:34.423 bw ( KiB/s): min=10240, max=178176, per=4.22%, avg=34477.65, stdev=35422.22, samples=20
00:26:34.423 iops : min= 40, max= 696, avg=134.60, stdev=138.39, samples=20
00:26:34.423 lat (msec) : 10=5.25%, 20=3.97%, 50=6.95%, 100=5.46%, 250=12.91%
00:26:34.423 lat (msec) : 500=11.28%, 750=38.30%, 1000=11.21%, 2000=4.68%
00:26:34.423 cpu : usr=0.12%, sys=0.39%, ctx=273, majf=0, minf=4097
00:26:34.423 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5%
00:26:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.423 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.423 issued rwts: total=1410,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.423 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.423 job1: (groupid=0, jobs=1): err= 0: pid=300048: Sun Nov 17 11:20:57 2024
00:26:34.423 read: IOPS=512, BW=128MiB/s (134MB/s)(1299MiB/10143msec)
00:26:34.423 slat (usec): min=9, max=402411, avg=1580.09, stdev=11533.64
00:26:34.423 clat (usec): min=1329, max=875227, avg=123217.89, stdev=143401.81
00:26:34.423 lat (usec): min=1383, max=1079.1k, avg=124797.98, stdev=145265.90
00:26:34.423 clat percentiles (msec):
00:26:34.423 | 1.00th=[ 23], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 38],
00:26:34.423 | 30.00th=[ 46], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 91],
00:26:34.423 | 70.00th=[ 110], 80.00th=[ 167], 90.00th=[ 234], 95.00th=[ 502],
00:26:34.423 | 99.00th=[ 709], 99.50th=[ 743], 99.90th=[ 760], 99.95th=[ 768],
00:26:34.423 | 99.99th=[ 877]
00:26:34.423 bw ( KiB/s): min=18944, max=370688, per=16.09%, avg=131409.85, stdev=114394.45, samples=20
00:26:34.423 iops : min= 74, max= 1448, avg=513.25, stdev=446.86, samples=20
00:26:34.423 lat (msec) : 2=0.10%, 4=0.21%, 20=0.04%, 50=31.21%, 100=34.92%
00:26:34.423 lat (msec) : 250=24.36%, 500=4.10%, 750=4.85%, 1000=0.21%
00:26:34.423 cpu : usr=0.25%, sys=1.72%, ctx=769, majf=0, minf=3721
00:26:34.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:26:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.423 issued rwts: total=5197,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.423 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.423 job2: (groupid=0, jobs=1): err= 0: pid=300070: Sun Nov 17 11:20:57 2024
00:26:34.423 read: IOPS=257, BW=64.3MiB/s (67.5MB/s)(648MiB/10077msec)
00:26:34.423 slat (usec): min=8, max=125373, avg=3754.75, stdev=13386.20
00:26:34.423 clat (msec): min=41, max=516, avg=244.79, stdev=111.45
00:26:34.423 lat (msec): min=41, max=516, avg=248.55, stdev=113.08
00:26:34.423 clat percentiles (msec):
00:26:34.423 | 1.00th=[ 58], 5.00th=[ 74], 10.00th=[ 92], 20.00th=[ 114],
00:26:34.423 | 30.00th=[ 167], 40.00th=[ 226], 50.00th=[ 262], 60.00th=[ 288],
00:26:34.423 | 70.00th=[ 313], 80.00th=[ 342], 90.00th=[ 384], 95.00th=[ 426],
00:26:34.423 | 99.00th=[ 477], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 518],
00:26:34.423 | 99.99th=[ 518]
00:26:34.423 bw ( KiB/s): min=33792, max=157696, per=7.93%, avg=64772.85, stdev=33738.24, samples=20
00:26:34.423 iops : min= 132, max= 616, avg=252.95, stdev=131.79, samples=20
00:26:34.423 lat (msec) : 50=0.46%, 100=14.15%, 250=31.62%, 500=53.10%, 750=0.66%
00:26:34.423 cpu : usr=0.19%, sys=0.70%, ctx=322, majf=0, minf=4097
00:26:34.423 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6%
00:26:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.423 issued rwts: total=2593,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.423 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.423 job3: (groupid=0, jobs=1): err= 0: pid=300081: Sun Nov 17 11:20:57 2024
00:26:34.423 read: IOPS=628, BW=157MiB/s (165MB/s)(1597MiB/10166msec)
00:26:34.423 slat (usec): min=11, max=386073, avg=1381.11, stdev=7671.90
00:26:34.423 clat (usec): min=1543, max=972689, avg=100417.30, stdev=97576.55
00:26:34.423 lat (usec): min=1608, max=972709, avg=101798.41, stdev=98879.73
00:26:34.423 clat percentiles (msec):
00:26:34.423 | 1.00th=[ 4], 5.00th=[ 23], 10.00th=[ 34], 20.00th=[ 39],
00:26:34.423 | 30.00th=[ 43], 40.00th=[ 58], 50.00th=[ 77], 60.00th=[ 94],
00:26:34.423 | 70.00th=[ 108], 80.00th=[ 138], 90.00th=[ 201], 95.00th=[ 249],
00:26:34.423 | 99.00th=[ 506], 99.50th=[ 743], 99.90th=[ 894], 99.95th=[ 927],
00:26:34.423 | 99.99th=[ 969]
00:26:34.423 bw ( KiB/s): min=25600, max=438272, per=19.81%, avg=161833.30, stdev=105716.42, samples=20
00:26:34.423 iops : min= 100, max= 1712, avg=632.05, stdev=412.87, samples=20
00:26:34.423 lat (msec) : 2=0.06%, 4=1.25%, 10=1.52%, 20=1.58%, 50=31.68%
00:26:34.423 lat (msec) : 100=28.89%, 250=30.11%, 500=3.77%, 750=0.81%, 1000=0.31%
00:26:34.423 cpu : usr=0.25%, sys=2.13%, ctx=1209, majf=0, minf=4097
00:26:34.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:26:34.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.423 issued rwts: total=6386,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.423 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.423 job4: (groupid=0, jobs=1): err= 0: pid=300085: Sun Nov 17 11:20:57 2024
00:26:34.423 read: IOPS=210, BW=52.5MiB/s (55.1MB/s)(529MiB/10071msec)
00:26:34.423 slat (usec): min=8, max=333741, avg=3528.56, stdev=19644.08
00:26:34.423 clat (usec): min=943, max=983771, avg=300873.40, stdev=274026.66
00:26:34.423 lat (usec): min=957, max=983823, avg=304401.96, stdev=277730.94
00:26:34.423 clat percentiles (usec):
00:26:34.423 | 1.00th=[ 1205], 5.00th=[ 2008], 10.00th=[ 7046], 20.00th=[ 42206],
00:26:34.423 | 30.00th=[ 60556], 40.00th=[ 90702], 50.00th=[168821], 60.00th=[442500],
00:26:34.423 | 70.00th=[530580], 80.00th=[574620], 90.00th=[658506], 95.00th=[734004],
00:26:34.423 | 99.00th=[884999], 99.50th=[943719], 99.90th=[952108], 99.95th=[952108],
00:26:34.423 | 99.99th=[985662]
00:26:34.423 bw ( KiB/s): min=17408, max=194560, per=6.43%, avg=52518.35, stdev=44747.01, samples=20
00:26:34.423 iops : min= 68, max= 760, avg=205.10, stdev=174.81, samples=20
00:26:34.423 lat (usec) : 1000=0.09%
00:26:34.423 lat (msec) : 2=4.87%, 4=3.17%, 10=7.37%, 20=3.69%, 50=5.43%
00:26:34.423 lat (msec) : 100=17.96%, 250=11.58%, 500=10.21%, 750=30.91%, 1000=4.73%
00:26:34.423 cpu : usr=0.07%, sys=0.58%, ctx=584, majf=0, minf=4097
00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.424 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.424 job5: (groupid=0, jobs=1): err= 0: pid=300134: Sun Nov 17 11:20:57 2024
00:26:34.424 read: IOPS=221, BW=55.5MiB/s (58.2MB/s)(564MiB/10171msec)
00:26:34.424 slat (usec): min=8, max=463836, avg=3531.27, stdev=19353.43
00:26:34.424 clat (msec): min=2, max=941, avg=284.66, stdev=220.50
00:26:34.424 lat (msec): min=2, max=941, avg=288.19, stdev=223.07
00:26:34.424 clat percentiles (msec):
00:26:34.424 | 1.00th=[ 6], 5.00th=[ 64], 10.00th=[ 88], 20.00th=[ 124],
00:26:34.424 | 30.00th=[ 138], 40.00th=[ 163], 50.00th=[ 203], 60.00th=[ 241],
00:26:34.424 | 70.00th=[ 275], 80.00th=[ 535], 90.00th=[ 642], 95.00th=[ 726],
00:26:34.424 | 99.00th=[ 885], 99.50th=[ 944], 99.90th=[ 944], 99.95th=[ 944],
00:26:34.424 | 99.99th=[ 944]
00:26:34.424 bw ( KiB/s): min=11264, max=158208, per=6.87%, avg=56126.30, stdev=37803.68, samples=20
00:26:34.424 iops : min= 44, max= 618, avg=219.20, stdev=147.67, samples=20
00:26:34.424 lat (msec) : 4=0.31%, 10=0.80%, 20=0.31%, 50=1.33%, 100=11.52%
00:26:34.424 lat (msec) : 250=48.29%, 500=14.18%, 750=19.36%, 1000=3.90%
00:26:34.424 cpu : usr=0.13%, sys=0.57%, ctx=459, majf=0, minf=4097
00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2%
00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.424 issued rwts: total=2257,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.424 job6: (groupid=0, jobs=1): err= 0: pid=300152: Sun Nov 17 11:20:57 2024
00:26:34.424 read: IOPS=329, BW=82.3MiB/s (86.3MB/s)(825MiB/10025msec)
00:26:34.424 slat (usec): min=13, max=115957, avg=2853.28, stdev=11226.39
00:26:34.424 clat (usec): min=1792, max=522177, avg=191501.79, stdev=135219.13
00:26:34.424 lat (usec): min=1860, max=522202, avg=194355.07, stdev=137335.53
00:26:34.424 clat percentiles (msec):
00:26:34.424 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 45],
00:26:34.424 | 30.00th=[ 56], 40.00th=[ 107], 50.00th=[ 228], 60.00th=[ 264],
00:26:34.424 | 70.00th=[ 292], 80.00th=[ 317], 90.00th=[ 359], 95.00th=[ 397],
00:26:34.424 | 99.00th=[ 460], 99.50th=[ 477], 99.90th=[ 489], 99.95th=[ 518],
00:26:34.424 | 99.99th=[ 523]
00:26:34.424 bw ( KiB/s): min=34816, max=362496, per=10.14%, avg=82831.35, stdev=85590.12, samples=20
00:26:34.424 iops : min= 136, max= 1416, avg=323.50, stdev=334.36, samples=20
00:26:34.424 lat (msec) : 2=0.06%, 4=0.52%, 10=2.64%, 20=5.64%, 50=18.22%
00:26:34.424 lat (msec) : 100=12.40%, 250=15.85%, 500=44.62%, 750=0.06%
00:26:34.424 cpu : usr=0.16%, sys=1.11%, ctx=564, majf=0, minf=4097
00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1%
00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.424 issued rwts: total=3299,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.424 job7: (groupid=0, jobs=1): err= 0: pid=300175: Sun Nov 17 11:20:57 2024
00:26:34.424 read: IOPS=207, BW=51.8MiB/s (54.4MB/s)(523MiB/10078msec)
00:26:34.424 slat (usec): min=8, max=406557, avg=3030.73, stdev=21224.36
00:26:34.424 clat (usec): min=1767, max=1127.4k, avg=305367.42, stdev=282648.64
00:26:34.424 lat (usec): min=1793, max=1127.4k, avg=308398.14, stdev=285698.69
00:26:34.424 clat percentiles (msec):
00:26:34.424 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 33],
00:26:34.424 | 30.00th=[ 47], 40.00th=[ 106], 50.00th=[ 190], 60.00th=[ 351],
00:26:34.424 | 70.00th=[ 523], 80.00th=[ 625], 90.00th=[ 709], 95.00th=[ 760],
00:26:34.424 | 99.00th=[ 902], 99.50th=[ 978], 99.90th=[ 1028], 99.95th=[ 1028],
00:26:34.424 | 99.99th=[ 1133]
00:26:34.424 bw ( KiB/s): min=17408, max=264192, per=6.35%, avg=51885.10, stdev=58291.79, samples=20
00:26:34.424 iops : min= 68, max= 1032, avg=202.60, stdev=227.73, samples=20
00:26:34.424 lat (msec) : 2=0.14%, 4=0.91%, 10=2.25%, 20=9.38%, 50=18.71%
00:26:34.424 lat (msec) : 100=7.75%, 250=14.45%, 500=12.25%, 750=29.04%, 1000=4.64%
00:26:34.424 lat (msec) : 2000=0.48%
00:26:34.424 cpu : usr=0.09%, sys=0.48%, ctx=538, majf=0, minf=4098
00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0%
00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:34.424 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:34.424 job8: (groupid=0, jobs=1): err= 0: pid=300207: Sun Nov 17 11:20:57 2024
00:26:34.424 read: IOPS=285, BW=71.4MiB/s (74.9MB/s)(720MiB/10079msec)
00:26:34.424 slat (usec): min=12, max=116310, avg=3476.13, stdev=11559.74
00:26:34.424 clat (msec): min=26,
max=473, avg=220.41, stdev=92.42 00:26:34.424 lat (msec): min=26, max=473, avg=223.88, stdev=93.84 00:26:34.424 clat percentiles (msec): 00:26:34.424 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 87], 20.00th=[ 106], 00:26:34.424 | 30.00th=[ 174], 40.00th=[ 209], 50.00th=[ 239], 60.00th=[ 262], 00:26:34.424 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 330], 95.00th=[ 351], 00:26:34.424 | 99.00th=[ 409], 99.50th=[ 435], 99.90th=[ 472], 99.95th=[ 472], 00:26:34.424 | 99.99th=[ 472] 00:26:34.424 bw ( KiB/s): min=36864, max=169811, per=8.82%, avg=72069.65, stdev=32221.84, samples=20 00:26:34.424 iops : min= 144, max= 663, avg=281.45, stdev=125.85, samples=20 00:26:34.424 lat (msec) : 50=1.18%, 100=14.80%, 250=39.01%, 500=45.02% 00:26:34.424 cpu : usr=0.18%, sys=0.91%, ctx=371, majf=0, minf=4097 00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.424 issued rwts: total=2879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.424 job9: (groupid=0, jobs=1): err= 0: pid=300208: Sun Nov 17 11:20:57 2024 00:26:34.424 read: IOPS=252, BW=63.2MiB/s (66.2MB/s)(642MiB/10170msec) 00:26:34.424 slat (usec): min=8, max=362944, avg=2115.16, stdev=13386.72 00:26:34.424 clat (msec): min=2, max=1070, avg=251.06, stdev=192.25 00:26:34.424 lat (msec): min=2, max=1070, avg=253.18, stdev=193.37 00:26:34.424 clat percentiles (msec): 00:26:34.424 | 1.00th=[ 7], 5.00th=[ 30], 10.00th=[ 62], 20.00th=[ 87], 00:26:34.424 | 30.00th=[ 155], 40.00th=[ 188], 50.00th=[ 224], 60.00th=[ 253], 00:26:34.424 | 70.00th=[ 288], 80.00th=[ 334], 90.00th=[ 447], 95.00th=[ 684], 00:26:34.424 | 99.00th=[ 894], 99.50th=[ 1053], 99.90th=[ 1062], 99.95th=[ 1062], 00:26:34.424 | 99.99th=[ 1070] 00:26:34.424 bw ( KiB/s): min=12288, max=133120, 
per=7.85%, avg=64118.55, stdev=34181.02, samples=20 00:26:34.424 iops : min= 48, max= 520, avg=250.40, stdev=133.53, samples=20 00:26:34.424 lat (msec) : 4=0.08%, 10=2.14%, 20=1.60%, 50=3.62%, 100=13.55% 00:26:34.424 lat (msec) : 250=37.95%, 500=32.07%, 750=4.52%, 1000=3.62%, 2000=0.86% 00:26:34.424 cpu : usr=0.11%, sys=0.69%, ctx=742, majf=0, minf=4097 00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:26:34.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.424 issued rwts: total=2569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.424 job10: (groupid=0, jobs=1): err= 0: pid=300209: Sun Nov 17 11:20:57 2024 00:26:34.424 read: IOPS=163, BW=40.9MiB/s (42.9MB/s)(416MiB/10172msec) 00:26:34.424 slat (usec): min=8, max=291927, avg=4386.26, stdev=22075.28 00:26:34.424 clat (msec): min=3, max=975, avg=386.78, stdev=265.83 00:26:34.424 lat (msec): min=3, max=975, avg=391.16, stdev=269.29 00:26:34.424 clat percentiles (msec): 00:26:34.424 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 26], 00:26:34.424 | 30.00th=[ 159], 40.00th=[ 414], 50.00th=[ 485], 60.00th=[ 531], 00:26:34.424 | 70.00th=[ 567], 80.00th=[ 609], 90.00th=[ 676], 95.00th=[ 726], 00:26:34.424 | 99.00th=[ 894], 99.50th=[ 936], 99.90th=[ 978], 99.95th=[ 978], 00:26:34.424 | 99.99th=[ 978] 00:26:34.424 bw ( KiB/s): min=17884, max=187904, per=5.01%, avg=40926.90, stdev=37445.43, samples=20 00:26:34.424 iops : min= 69, max= 734, avg=159.80, stdev=146.29, samples=20 00:26:34.424 lat (msec) : 4=0.06%, 10=8.72%, 20=9.62%, 50=8.12%, 100=3.13% 00:26:34.424 lat (msec) : 250=1.92%, 500=21.89%, 750=42.27%, 1000=4.27% 00:26:34.424 cpu : usr=0.10%, sys=0.50%, ctx=422, majf=0, minf=4098 00:26:34.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:26:34.424 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:34.424 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:34.424 00:26:34.424 Run status group 0 (all jobs): 00:26:34.424 READ: bw=798MiB/s (837MB/s), 34.7MiB/s-157MiB/s (36.3MB/s-165MB/s), io=8115MiB (8509MB), run=10025-10172msec 00:26:34.424 00:26:34.424 Disk stats (read/write): 00:26:34.424 nvme0n1: ios=2672/0, merge=0/0, ticks=1206849/0, in_queue=1206849, util=97.05% 00:26:34.424 nvme10n1: ios=10221/0, merge=0/0, ticks=1201691/0, in_queue=1201691, util=97.26% 00:26:34.424 nvme1n1: ios=5023/0, merge=0/0, ticks=1234545/0, in_queue=1234545, util=97.53% 00:26:34.424 nvme2n1: ios=12645/0, merge=0/0, ticks=1213455/0, in_queue=1213455, util=97.70% 00:26:34.424 nvme3n1: ios=3946/0, merge=0/0, ticks=1237403/0, in_queue=1237403, util=97.79% 00:26:34.424 nvme4n1: ios=4513/0, merge=0/0, ticks=1272837/0, in_queue=1272837, util=98.22% 00:26:34.424 nvme5n1: ios=6268/0, merge=0/0, ticks=1241614/0, in_queue=1241614, util=98.34% 00:26:34.424 nvme6n1: ios=3979/0, merge=0/0, ticks=1237641/0, in_queue=1237641, util=98.48% 00:26:34.424 nvme7n1: ios=5556/0, merge=0/0, ticks=1238503/0, in_queue=1238503, util=98.92% 00:26:34.424 nvme8n1: ios=5134/0, merge=0/0, ticks=1277865/0, in_queue=1277865, util=99.14% 00:26:34.424 nvme9n1: ios=3283/0, merge=0/0, ticks=1261343/0, in_queue=1261343, util=99.27% 00:26:34.424 11:20:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:34.424 [global] 00:26:34.424 thread=1 00:26:34.424 invalidate=1 00:26:34.424 rw=randwrite 00:26:34.424 time_based=1 00:26:34.425 runtime=10 00:26:34.425 ioengine=libaio 00:26:34.425 direct=1 00:26:34.425 bs=262144 00:26:34.425 
iodepth=64 00:26:34.425 norandommap=1 00:26:34.425 numjobs=1 00:26:34.425 00:26:34.425 [job0] 00:26:34.425 filename=/dev/nvme0n1 00:26:34.425 [job1] 00:26:34.425 filename=/dev/nvme10n1 00:26:34.425 [job2] 00:26:34.425 filename=/dev/nvme1n1 00:26:34.425 [job3] 00:26:34.425 filename=/dev/nvme2n1 00:26:34.425 [job4] 00:26:34.425 filename=/dev/nvme3n1 00:26:34.425 [job5] 00:26:34.425 filename=/dev/nvme4n1 00:26:34.425 [job6] 00:26:34.425 filename=/dev/nvme5n1 00:26:34.425 [job7] 00:26:34.425 filename=/dev/nvme6n1 00:26:34.425 [job8] 00:26:34.425 filename=/dev/nvme7n1 00:26:34.425 [job9] 00:26:34.425 filename=/dev/nvme8n1 00:26:34.425 [job10] 00:26:34.425 filename=/dev/nvme9n1 00:26:34.425 Could not set queue depth (nvme0n1) 00:26:34.425 Could not set queue depth (nvme10n1) 00:26:34.425 Could not set queue depth (nvme1n1) 00:26:34.425 Could not set queue depth (nvme2n1) 00:26:34.425 Could not set queue depth (nvme3n1) 00:26:34.425 Could not set queue depth (nvme4n1) 00:26:34.425 Could not set queue depth (nvme5n1) 00:26:34.425 Could not set queue depth (nvme6n1) 00:26:34.425 Could not set queue depth (nvme7n1) 00:26:34.425 Could not set queue depth (nvme8n1) 00:26:34.425 Could not set queue depth (nvme9n1) 00:26:34.425 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:34.425 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:34.425 fio-3.35 00:26:34.425 Starting 11 threads 00:26:44.397 00:26:44.397 job0: (groupid=0, jobs=1): err= 0: pid=300929: Sun Nov 17 11:21:08 2024 00:26:44.397 write: IOPS=284, BW=71.2MiB/s (74.6MB/s)(726MiB/10191msec); 0 zone resets 00:26:44.397 slat (usec): min=24, max=84503, avg=2570.97, stdev=7159.03 00:26:44.397 clat (msec): min=3, max=831, avg=222.06, stdev=158.72 00:26:44.397 lat (msec): min=3, max=843, avg=224.63, stdev=160.81 00:26:44.397 clat percentiles (msec): 00:26:44.397 | 1.00th=[ 16], 5.00th=[ 41], 10.00th=[ 50], 20.00th=[ 84], 00:26:44.397 | 30.00th=[ 117], 40.00th=[ 153], 50.00th=[ 190], 60.00th=[ 230], 00:26:44.397 | 70.00th=[ 266], 80.00th=[ 338], 90.00th=[ 472], 95.00th=[ 558], 00:26:44.397 | 99.00th=[ 676], 99.50th=[ 793], 99.90th=[ 827], 99.95th=[ 835], 00:26:44.397 | 99.99th=[ 835] 00:26:44.397 bw ( KiB/s): min=28672, max=211456, per=7.08%, avg=72664.90, stdev=42782.61, samples=20 00:26:44.397 iops : min= 112, max= 826, avg=283.80, stdev=167.05, samples=20 00:26:44.397 lat (msec) : 4=0.03%, 10=0.21%, 20=1.52%, 50=8.48%, 100=15.95% 00:26:44.397 lat (msec) : 250=38.25%, 500=27.84%, 750=7.00%, 1000=0.72% 00:26:44.397 cpu : usr=0.79%, sys=1.04%, ctx=1383, majf=0, minf=1 00:26:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.397 issued rwts: total=0,2902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.397 job1: (groupid=0, jobs=1): err= 0: pid=300942: Sun Nov 17 11:21:08 2024 00:26:44.397 write: IOPS=439, BW=110MiB/s (115MB/s)(1111MiB/10109msec); 0 zone resets 00:26:44.397 slat (usec): min=17, max=147208, avg=1281.39, stdev=5145.58 00:26:44.397 clat (usec): min=784, max=666241, avg=144204.94, stdev=134188.78 00:26:44.397 lat (usec): min=825, max=666286, avg=145486.33, stdev=135567.30 00:26:44.397 clat percentiles (msec): 00:26:44.397 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 31], 00:26:44.397 | 30.00th=[ 46], 40.00th=[ 66], 50.00th=[ 114], 60.00th=[ 150], 00:26:44.397 | 70.00th=[ 194], 80.00th=[ 239], 90.00th=[ 284], 95.00th=[ 460], 00:26:44.397 | 99.00th=[ 567], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 659], 00:26:44.397 | 99.99th=[ 667] 00:26:44.397 bw ( KiB/s): min=32768, max=232448, per=10.93%, avg=112143.50, stdev=59262.52, samples=20 00:26:44.397 iops : min= 128, max= 908, avg=438.05, stdev=231.50, samples=20 00:26:44.397 lat (usec) : 1000=0.05% 00:26:44.397 lat (msec) : 2=0.38%, 4=2.63%, 10=4.52%, 20=6.55%, 50=18.05% 00:26:44.397 lat (msec) : 100=15.12%, 250=34.29%, 500=14.83%, 750=3.58% 00:26:44.397 cpu : usr=1.29%, sys=1.71%, ctx=3227, majf=0, minf=1 00:26:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.397 issued rwts: total=0,4444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.397 job2: (groupid=0, jobs=1): err= 0: pid=300943: Sun Nov 17 11:21:08 2024 00:26:44.397 write: IOPS=324, BW=81.1MiB/s 
(85.1MB/s)(827MiB/10188msec); 0 zone resets 00:26:44.397 slat (usec): min=19, max=225574, avg=1957.88, stdev=8750.08 00:26:44.397 clat (usec): min=856, max=1074.0k, avg=195172.07, stdev=212411.97 00:26:44.397 lat (usec): min=919, max=1074.1k, avg=197129.95, stdev=214473.72 00:26:44.397 clat percentiles (usec): 00:26:44.397 | 1.00th=[ 1713], 5.00th=[ 6652], 10.00th=[ 12911], 00:26:44.397 | 20.00th=[ 24511], 30.00th=[ 41157], 40.00th=[ 50070], 00:26:44.397 | 50.00th=[ 78119], 60.00th=[ 166724], 70.00th=[ 287310], 00:26:44.397 | 80.00th=[ 379585], 90.00th=[ 566232], 95.00th=[ 624952], 00:26:44.397 | 99.00th=[ 792724], 99.50th=[ 834667], 99.90th=[ 859833], 00:26:44.397 | 99.95th=[ 868221], 99.99th=[1082131] 00:26:44.397 bw ( KiB/s): min=24576, max=239616, per=8.08%, avg=82971.65, stdev=70541.14, samples=20 00:26:44.397 iops : min= 96, max= 936, avg=324.10, stdev=275.53, samples=20 00:26:44.397 lat (usec) : 1000=0.15% 00:26:44.397 lat (msec) : 2=1.00%, 4=1.66%, 10=4.60%, 20=8.80%, 50=23.84% 00:26:44.397 lat (msec) : 100=12.79%, 250=15.82%, 500=18.33%, 750=11.43%, 1000=1.54% 00:26:44.397 lat (msec) : 2000=0.03% 00:26:44.397 cpu : usr=0.99%, sys=1.18%, ctx=2192, majf=0, minf=1 00:26:44.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:44.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.397 issued rwts: total=0,3306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.397 job3: (groupid=0, jobs=1): err= 0: pid=300944: Sun Nov 17 11:21:08 2024 00:26:44.397 write: IOPS=294, BW=73.7MiB/s (77.2MB/s)(751MiB/10188msec); 0 zone resets 00:26:44.397 slat (usec): min=18, max=225887, avg=2154.76, stdev=7994.20 00:26:44.397 clat (msec): min=3, max=753, avg=214.94, stdev=165.17 00:26:44.397 lat (msec): min=3, max=753, avg=217.09, stdev=166.76 00:26:44.397 clat percentiles 
(msec): 00:26:44.397 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 62], 00:26:44.397 | 30.00th=[ 126], 40.00th=[ 153], 50.00th=[ 190], 60.00th=[ 220], 00:26:44.397 | 70.00th=[ 259], 80.00th=[ 305], 90.00th=[ 485], 95.00th=[ 584], 00:26:44.397 | 99.00th=[ 667], 99.50th=[ 693], 99.90th=[ 735], 99.95th=[ 743], 00:26:44.397 | 99.99th=[ 751] 00:26:44.397 bw ( KiB/s): min=22528, max=164864, per=7.33%, avg=75225.95, stdev=36453.72, samples=20 00:26:44.397 iops : min= 88, max= 644, avg=293.85, stdev=142.40, samples=20 00:26:44.397 lat (msec) : 4=0.07%, 10=1.77%, 20=4.00%, 50=11.63%, 100=9.79% 00:26:44.398 lat (msec) : 250=39.71%, 500=24.35%, 750=8.66%, 1000=0.03% 00:26:44.398 cpu : usr=0.92%, sys=0.98%, ctx=1879, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,3002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job4: (groupid=0, jobs=1): err= 0: pid=300945: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=270, BW=67.6MiB/s (70.9MB/s)(688MiB/10177msec); 0 zone resets 00:26:44.398 slat (usec): min=15, max=105440, avg=2588.63, stdev=7822.61 00:26:44.398 clat (usec): min=729, max=758872, avg=233989.19, stdev=179004.32 00:26:44.398 lat (usec): min=755, max=771509, avg=236577.83, stdev=181293.43 00:26:44.398 clat percentiles (msec): 00:26:44.398 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 20], 20.00th=[ 74], 00:26:44.398 | 30.00th=[ 134], 40.00th=[ 174], 50.00th=[ 211], 60.00th=[ 243], 00:26:44.398 | 70.00th=[ 268], 80.00th=[ 334], 90.00th=[ 542], 95.00th=[ 625], 00:26:44.398 | 99.00th=[ 718], 99.50th=[ 726], 99.90th=[ 751], 99.95th=[ 751], 00:26:44.398 | 99.99th=[ 760] 00:26:44.398 bw ( KiB/s): min=22528, max=157892, per=6.70%, avg=68797.00, 
stdev=37040.91, samples=20 00:26:44.398 iops : min= 88, max= 616, avg=268.70, stdev=144.59, samples=20 00:26:44.398 lat (usec) : 750=0.07%, 1000=0.15% 00:26:44.398 lat (msec) : 2=0.40%, 4=2.69%, 10=2.84%, 20=4.22%, 50=6.07% 00:26:44.398 lat (msec) : 100=7.82%, 250=37.77%, 500=26.61%, 750=11.27%, 1000=0.11% 00:26:44.398 cpu : usr=0.68%, sys=0.83%, ctx=1584, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job5: (groupid=0, jobs=1): err= 0: pid=300946: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=203, BW=50.8MiB/s (53.3MB/s)(518MiB/10187msec); 0 zone resets 00:26:44.398 slat (usec): min=15, max=75935, avg=3728.53, stdev=9955.63 00:26:44.398 clat (msec): min=2, max=777, avg=310.20, stdev=198.42 00:26:44.398 lat (msec): min=2, max=777, avg=313.93, stdev=201.28 00:26:44.398 clat percentiles (msec): 00:26:44.398 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 45], 20.00th=[ 96], 00:26:44.398 | 30.00th=[ 157], 40.00th=[ 259], 50.00th=[ 317], 60.00th=[ 355], 00:26:44.398 | 70.00th=[ 447], 80.00th=[ 502], 90.00th=[ 558], 95.00th=[ 651], 00:26:44.398 | 99.00th=[ 743], 99.50th=[ 768], 99.90th=[ 776], 99.95th=[ 776], 00:26:44.398 | 99.99th=[ 776] 00:26:44.398 bw ( KiB/s): min=20480, max=134144, per=5.01%, avg=51376.50, stdev=25937.52, samples=20 00:26:44.398 iops : min= 80, max= 524, avg=200.65, stdev=101.36, samples=20 00:26:44.398 lat (msec) : 4=0.24%, 10=0.72%, 20=1.59%, 50=8.55%, 100=10.10% 00:26:44.398 lat (msec) : 250=17.97%, 500=41.21%, 750=18.74%, 1000=0.87% 00:26:44.398 cpu : usr=0.79%, sys=0.62%, ctx=1137, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, 
>=64=97.0% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,2070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job6: (groupid=0, jobs=1): err= 0: pid=300947: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=414, BW=104MiB/s (109MB/s)(1052MiB/10161msec); 0 zone resets 00:26:44.398 slat (usec): min=12, max=214540, avg=1714.99, stdev=6677.28 00:26:44.398 clat (usec): min=642, max=933696, avg=152707.55, stdev=167299.64 00:26:44.398 lat (usec): min=688, max=933754, avg=154422.54, stdev=168673.93 00:26:44.398 clat percentiles (usec): 00:26:44.398 | 1.00th=[ 1483], 5.00th=[ 9110], 10.00th=[ 18220], 20.00th=[ 35390], 00:26:44.398 | 30.00th=[ 56886], 40.00th=[ 61604], 50.00th=[106431], 60.00th=[135267], 00:26:44.398 | 70.00th=[152044], 80.00th=[191890], 90.00th=[429917], 95.00th=[574620], 00:26:44.398 | 99.00th=[700449], 99.50th=[809501], 99.90th=[901776], 99.95th=[935330], 00:26:44.398 | 99.99th=[935330] 00:26:44.398 bw ( KiB/s): min=28672, max=269312, per=10.34%, avg=106129.60, stdev=74704.40, samples=20 00:26:44.398 iops : min= 112, max= 1052, avg=414.55, stdev=291.82, samples=20 00:26:44.398 lat (usec) : 750=0.14%, 1000=0.38% 00:26:44.398 lat (msec) : 2=1.05%, 4=0.95%, 10=2.83%, 20=5.42%, 50=13.28% 00:26:44.398 lat (msec) : 100=23.62%, 250=36.09%, 500=8.29%, 750=7.25%, 1000=0.71% 00:26:44.398 cpu : usr=1.09%, sys=1.03%, ctx=2292, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,4209,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 
00:26:44.398 job7: (groupid=0, jobs=1): err= 0: pid=300948: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=247, BW=61.8MiB/s (64.8MB/s)(629MiB/10182msec); 0 zone resets 00:26:44.398 slat (usec): min=15, max=164452, avg=2713.02, stdev=9290.11 00:26:44.398 clat (usec): min=1386, max=834574, avg=256192.97, stdev=225320.27 00:26:44.398 lat (usec): min=1431, max=839697, avg=258905.99, stdev=227845.27 00:26:44.398 clat percentiles (msec): 00:26:44.398 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 31], 00:26:44.398 | 30.00th=[ 91], 40.00th=[ 126], 50.00th=[ 184], 60.00th=[ 300], 00:26:44.398 | 70.00th=[ 376], 80.00th=[ 489], 90.00th=[ 575], 95.00th=[ 693], 00:26:44.398 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 827], 99.95th=[ 835], 00:26:44.398 | 99.99th=[ 835] 00:26:44.398 bw ( KiB/s): min=16384, max=199680, per=6.11%, avg=62763.65, stdev=43934.52, samples=20 00:26:44.398 iops : min= 64, max= 780, avg=245.15, stdev=171.61, samples=20 00:26:44.398 lat (msec) : 2=0.16%, 4=3.78%, 10=9.94%, 20=3.22%, 50=7.95% 00:26:44.398 lat (msec) : 100=6.24%, 250=25.09%, 500=25.09%, 750=15.27%, 1000=3.26% 00:26:44.398 cpu : usr=0.57%, sys=0.74%, ctx=1583, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,2515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job8: (groupid=0, jobs=1): err= 0: pid=300953: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=386, BW=96.6MiB/s (101MB/s)(984MiB/10189msec); 0 zone resets 00:26:44.398 slat (usec): min=15, max=146804, avg=1395.00, stdev=6061.22 00:26:44.398 clat (usec): min=975, max=677833, avg=164108.00, stdev=154083.47 00:26:44.398 lat (usec): min=1004, max=677865, avg=165503.00, stdev=155079.93 00:26:44.398 clat 
percentiles (msec): 00:26:44.398 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 35], 00:26:44.398 | 30.00th=[ 56], 40.00th=[ 91], 50.00th=[ 125], 60.00th=[ 153], 00:26:44.398 | 70.00th=[ 199], 80.00th=[ 253], 90.00th=[ 435], 95.00th=[ 506], 00:26:44.398 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 625], 99.95th=[ 667], 00:26:44.398 | 99.99th=[ 676] 00:26:44.398 bw ( KiB/s): min=26624, max=206848, per=9.66%, avg=99139.75, stdev=50653.62, samples=20 00:26:44.398 iops : min= 104, max= 808, avg=387.25, stdev=197.87, samples=20 00:26:44.398 lat (usec) : 1000=0.03% 00:26:44.398 lat (msec) : 2=0.79%, 4=2.21%, 10=4.80%, 20=6.43%, 50=12.32% 00:26:44.398 lat (msec) : 100=15.62%, 250=37.62%, 500=14.73%, 750=5.46% 00:26:44.398 cpu : usr=1.27%, sys=1.27%, ctx=2666, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,3937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job9: (groupid=0, jobs=1): err= 0: pid=300954: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=371, BW=92.8MiB/s (97.4MB/s)(945MiB/10175msec); 0 zone resets 00:26:44.398 slat (usec): min=18, max=104759, avg=1707.06, stdev=6339.93 00:26:44.398 clat (usec): min=708, max=743922, avg=170483.53, stdev=172559.33 00:26:44.398 lat (usec): min=737, max=744000, avg=172190.59, stdev=174368.53 00:26:44.398 clat percentiles (usec): 00:26:44.398 | 1.00th=[ 1680], 5.00th=[ 6128], 10.00th=[ 10421], 20.00th=[ 33817], 00:26:44.398 | 30.00th=[ 53216], 40.00th=[ 64226], 50.00th=[ 92799], 60.00th=[149947], 00:26:44.398 | 70.00th=[212861], 80.00th=[316670], 90.00th=[467665], 95.00th=[549454], 00:26:44.398 | 99.00th=[633340], 99.50th=[658506], 99.90th=[725615], 99.95th=[734004], 00:26:44.398 | 99.99th=[742392] 
00:26:44.398 bw ( KiB/s): min=26624, max=296448, per=9.27%, avg=95125.45, stdev=74254.33, samples=20 00:26:44.398 iops : min= 104, max= 1158, avg=371.55, stdev=290.08, samples=20 00:26:44.398 lat (usec) : 750=0.11%, 1000=0.08% 00:26:44.398 lat (msec) : 2=1.27%, 4=1.80%, 10=6.35%, 20=4.82%, 50=13.36% 00:26:44.398 lat (msec) : 100=24.42%, 250=20.32%, 500=18.95%, 750=8.52% 00:26:44.398 cpu : usr=1.04%, sys=1.22%, ctx=2416, majf=0, minf=1 00:26:44.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:44.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.398 issued rwts: total=0,3779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.398 job10: (groupid=0, jobs=1): err= 0: pid=300962: Sun Nov 17 11:21:08 2024 00:26:44.398 write: IOPS=785, BW=196MiB/s (206MB/s)(1987MiB/10110msec); 0 zone resets 00:26:44.398 slat (usec): min=17, max=91008, avg=884.64, stdev=3310.98 00:26:44.398 clat (usec): min=790, max=679482, avg=80488.81, stdev=92075.61 00:26:44.398 lat (usec): min=820, max=718133, avg=81373.45, stdev=92990.87 00:26:44.398 clat percentiles (msec): 00:26:44.398 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 21], 20.00th=[ 39], 00:26:44.398 | 30.00th=[ 43], 40.00th=[ 52], 50.00th=[ 55], 60.00th=[ 57], 00:26:44.398 | 70.00th=[ 61], 80.00th=[ 101], 90.00th=[ 180], 95.00th=[ 253], 00:26:44.398 | 99.00th=[ 584], 99.50th=[ 617], 99.90th=[ 651], 99.95th=[ 676], 00:26:44.398 | 99.99th=[ 684] 00:26:44.399 bw ( KiB/s): min=51200, max=385536, per=19.66%, avg=201799.50, stdev=108898.01, samples=20 00:26:44.399 iops : min= 200, max= 1506, avg=788.25, stdev=425.43, samples=20 00:26:44.399 lat (usec) : 1000=0.06% 00:26:44.399 lat (msec) : 2=0.26%, 4=0.92%, 10=3.90%, 20=4.88%, 50=26.29% 00:26:44.399 lat (msec) : 100=43.63%, 250=14.91%, 500=3.66%, 750=1.47% 00:26:44.399 cpu : 
usr=2.39%, sys=2.54%, ctx=4016, majf=0, minf=1 00:26:44.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:44.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:44.399 issued rwts: total=0,7946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:44.399 00:26:44.399 Run status group 0 (all jobs): 00:26:44.399 WRITE: bw=1002MiB/s (1051MB/s), 50.8MiB/s-196MiB/s (53.3MB/s-206MB/s), io=9.98GiB (10.7GB), run=10109-10191msec 00:26:44.399 00:26:44.399 Disk stats (read/write): 00:26:44.399 nvme0n1: ios=49/5674, merge=0/0, ticks=28/1212550, in_queue=1212578, util=95.45% 00:26:44.399 nvme10n1: ios=37/8756, merge=0/0, ticks=1577/1225555, in_queue=1227132, util=99.92% 00:26:44.399 nvme1n1: ios=0/6475, merge=0/0, ticks=0/1215453, in_queue=1215453, util=96.09% 00:26:44.399 nvme2n1: ios=0/5875, merge=0/0, ticks=0/1220701, in_queue=1220701, util=96.42% 00:26:44.399 nvme3n1: ios=44/5355, merge=0/0, ticks=1610/1213675, in_queue=1215285, util=100.00% 00:26:44.399 nvme4n1: ios=43/4004, merge=0/0, ticks=1340/1206072, in_queue=1207412, util=100.00% 00:26:44.399 nvme5n1: ios=28/8247, merge=0/0, ticks=741/1216128, in_queue=1216869, util=99.97% 00:26:44.399 nvme6n1: ios=41/4895, merge=0/0, ticks=1160/1216188, in_queue=1217348, util=100.00% 00:26:44.399 nvme7n1: ios=37/7741, merge=0/0, ticks=776/1216360, in_queue=1217136, util=99.84% 00:26:44.399 nvme8n1: ios=35/7415, merge=0/0, ticks=799/1222597, in_queue=1223396, util=99.84% 00:26:44.399 nvme9n1: ios=36/15756, merge=0/0, ticks=1502/1222181, in_queue=1223683, util=99.84% 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:44.399 11:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:44.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.399 11:21:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:44.669 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.669 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:44.927 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:44.927 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- 
# waitforserial_disconnect SPDK3 00:26:44.927 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.927 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.927 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:45.185 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.185 11:21:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:45.443 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:45.443 11:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.443 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:46.008 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # 
grep -q -w SPDK6 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:46.008 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.008 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:46.266 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:46.266 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:46.266 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.266 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.266 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:46.267 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.267 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.525 11:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:46.525 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:46.525 11:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:46.525 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 
-- # nvmftestfini 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.525 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.525 rmmod nvme_tcp 00:26:46.783 rmmod nvme_fabrics 00:26:46.783 rmmod nvme_keyring 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 295927 ']' 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 295927 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 295927 ']' 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 295927 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295927 00:26:46.783 11:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295927' 00:26:46.783 killing process with pid 295927 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 295927 00:26:46.783 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 295927 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:47.350 11:21:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:49.257 00:26:49.257 real 1m1.102s 00:26:49.257 user 3m34.569s 00:26:49.257 sys 0m17.129s 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:49.257 ************************************ 00:26:49.257 END TEST nvmf_multiconnection 00:26:49.257 ************************************ 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.257 11:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:49.257 ************************************ 00:26:49.257 START TEST nvmf_initiator_timeout 00:26:49.257 ************************************ 00:26:49.258 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:49.520 * Looking for test storage... 
00:26:49.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:49.521 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:49.521 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:49.521 11:21:13 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:49.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.521 --rc genhtml_branch_coverage=1 00:26:49.521 --rc genhtml_function_coverage=1 00:26:49.521 --rc genhtml_legend=1 00:26:49.521 --rc geninfo_all_blocks=1 00:26:49.521 --rc geninfo_unexecuted_blocks=1 00:26:49.521 00:26:49.521 ' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:49.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.521 --rc genhtml_branch_coverage=1 00:26:49.521 --rc genhtml_function_coverage=1 00:26:49.521 --rc genhtml_legend=1 00:26:49.521 --rc geninfo_all_blocks=1 00:26:49.521 --rc geninfo_unexecuted_blocks=1 00:26:49.521 00:26:49.521 ' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:49.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.521 --rc genhtml_branch_coverage=1 00:26:49.521 --rc genhtml_function_coverage=1 00:26:49.521 --rc genhtml_legend=1 00:26:49.521 --rc geninfo_all_blocks=1 00:26:49.521 --rc geninfo_unexecuted_blocks=1 00:26:49.521 00:26:49.521 ' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:49.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.521 --rc genhtml_branch_coverage=1 00:26:49.521 --rc genhtml_function_coverage=1 00:26:49.521 --rc genhtml_legend=1 00:26:49.521 --rc geninfo_all_blocks=1 00:26:49.521 --rc geninfo_unexecuted_blocks=1 00:26:49.521 00:26:49.521 ' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.521 
11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:49.521 11:21:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.422 11:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.422 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.681 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.681 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.682 11:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.682 11:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.682 11:21:16 
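The namespace plumbing just logged (flush addresses, move `cvl_0_0` into `cvl_0_0_ns_spdk`, assign 10.0.0.1/10.0.0.2, bring links up) can be sketched as a standalone script. Interface names and addresses are taken from the log; the `run`/`DRY_RUN` wrapper is an illustrative addition so the sequence can be previewed without root, not part of the real `nvmf_tcp_init`.

```shell
#!/bin/sh
# Sketch of the target-namespace setup performed by nvmf_tcp_init in
# test/nvmf/common.sh. Interface names (cvl_0_0 / cvl_0_1) and the
# 10.0.0.0/24 addresses come from this log. DRY_RUN defaults to 1 so
# the commands are only echoed; unset it to execute for real (as root).
NS=cvl_0_0_ns_spdk
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"            # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP stays on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP in the netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
```

With both links up, the two `ping -c 1` checks in the log (host to 10.0.0.2, namespace to 10.0.0.1) confirm the path before the target starts.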
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:26:51.682 00:26:51.682 --- 10.0.0.2 ping statistics --- 00:26:51.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.682 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:51.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:26:51.682 00:26:51.682 --- 10.0.0.1 ping statistics --- 00:26:51.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.682 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=304764 
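The `waitforlisten 304764` call logged here blocks until the freshly launched `nvmf_tgt` is reachable on its RPC socket. A simplified sketch of that wait, using the `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100` values visible in the log; the real helper in `autotest_common.sh` does more (it probes the socket via an RPC rather than just checking for its existence), so treat this as an approximation:

```shell
# Simplified waitforlisten: poll until the target (pid $1) creates its
# RPC socket, or fail early if the process dies. Socket path and retry
# count match the defaults shown in the log; the polling logic itself
# is an illustrative reconstruction, not the verbatim helper.
waitforlisten() {
    pid=$1
    rpc_addr=${2:-/var/tmp/spdk.sock}
    max_retries=100
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$rpc_addr" ] && return 0          # socket exists: target is up
        kill -0 "$pid" 2>/dev/null || return 1  # process died before listening
        i=$((i + 1))
        sleep 0.1
    done
    return 1                                    # timed out
}
```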
00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 304764 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 304764 ']' 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.682 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.682 [2024-11-17 11:21:16.294872] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:26:51.682 [2024-11-17 11:21:16.294951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.941 [2024-11-17 11:21:16.374652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.941 [2024-11-17 11:21:16.424117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:51.941 [2024-11-17 11:21:16.424188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.941 [2024-11-17 11:21:16.424202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.941 [2024-11-17 11:21:16.424213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.941 [2024-11-17 11:21:16.424223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.941 [2024-11-17 11:21:16.425730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.941 [2024-11-17 11:21:16.425812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.941 [2024-11-17 11:21:16.425816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.941 [2024-11-17 11:21:16.425753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:51.941 
11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.941 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 Malloc0 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 Delay0 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 [2024-11-17 11:21:16.630367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.199 [2024-11-17 11:21:16.658698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.199 11:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:52.764 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:52.764 
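The target-side RPC sequence just logged (steps @19 through @27 of `initiator_timeout.sh`) condenses to six `rpc.py` calls. The commands and arguments below are taken verbatim from the log; the `rpc()` helper only echoes each invocation so the sequence can be previewed, and `RPC_PY` is a placeholder for wherever your SPDK checkout lives.

```shell
# Condensed target configuration driven through rpc_cmd in this test.
# rpc() echoes instead of executing; point RPC_PY at a real SPDK tree
# and drop the echo to run it against a live nvmf_tgt.
RPC_PY="scripts/rpc.py -s /var/tmp/spdk.sock"
rpc() { echo "$RPC_PY $*"; }

rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30  # 30 us delay bdev
rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport, options as logged
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, the initiator side connects with the `nvme connect` line shown above, and `waitforserial` greps `lsblk -l -o NAME,SERIAL` for `SPDKISFASTANDAWESOME` until the namespace appears as a block device.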
11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:52.764 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:52.764 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:52.764 11:21:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=305188 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:55.290 11:21:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:55.290 [global] 00:26:55.290 thread=1 00:26:55.290 invalidate=1 00:26:55.290 rw=write 00:26:55.290 time_based=1 00:26:55.290 runtime=60 00:26:55.290 ioengine=libaio 00:26:55.290 direct=1 00:26:55.290 bs=4096 00:26:55.290 
iodepth=1 00:26:55.290 norandommap=0 00:26:55.290 numjobs=1 00:26:55.290 00:26:55.290 verify_dump=1 00:26:55.290 verify_backlog=512 00:26:55.290 verify_state_save=0 00:26:55.290 do_verify=1 00:26:55.290 verify=crc32c-intel 00:26:55.290 [job0] 00:26:55.290 filename=/dev/nvme0n1 00:26:55.290 Could not set queue depth (nvme0n1) 00:26:55.290 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:55.290 fio-3.35 00:26:55.290 Starting 1 thread 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.818 true 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.818 true 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:57.818 true 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:57.818 true 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.818 11:21:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 true 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 true 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.099 11:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 true 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:01.099 true 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:01.099 11:21:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 305188 00:27:57.316 00:27:57.316 job0: (groupid=0, jobs=1): err= 0: pid=305263: Sun Nov 17 11:22:19 2024 00:27:57.316 read: IOPS=48, BW=196KiB/s (200kB/s)(11.5MiB/60008msec) 00:27:57.316 slat (nsec): min=4958, max=62544, avg=17203.68, stdev=8264.30 00:27:57.316 clat (usec): min=212, max=41369k, avg=20141.77, stdev=763624.97 00:27:57.316 lat (usec): min=217, max=41369k, avg=20158.97, stdev=763625.06 00:27:57.316 clat percentiles (usec): 00:27:57.316 | 1.00th=[ 229], 5.00th=[ 241], 10.00th=[ 249], 00:27:57.316 | 20.00th=[ 262], 30.00th=[ 273], 40.00th=[ 281], 00:27:57.316 | 50.00th=[ 289], 60.00th=[ 314], 70.00th=[ 338], 00:27:57.316 | 80.00th=[ 355], 90.00th=[ 41157], 95.00th=[ 
41157], 00:27:57.316 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 43254], 00:27:57.316 | 99.95th=[ 43779], 99.99th=[17112761] 00:27:57.316 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60008msec); 0 zone resets 00:27:57.316 slat (usec): min=6, max=24811, avg=27.29, stdev=447.39 00:27:57.316 clat (usec): min=163, max=2417, avg=236.00, stdev=56.21 00:27:57.316 lat (usec): min=170, max=25063, avg=263.29, stdev=451.55 00:27:57.316 clat percentiles (usec): 00:27:57.316 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:27:57.316 | 30.00th=[ 210], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 245], 00:27:57.316 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:27:57.316 | 99.00th=[ 359], 99.50th=[ 412], 99.90th=[ 441], 99.95th=[ 469], 00:27:57.316 | 99.99th=[ 2409] 00:27:57.316 bw ( KiB/s): min= 1040, max= 8152, per=100.00%, avg=4915.20, stdev=2818.49, samples=5 00:27:57.316 iops : min= 260, max= 2038, avg=1228.80, stdev=704.62, samples=5 00:27:57.316 lat (usec) : 250=38.34%, 500=54.52%, 750=0.17%, 1000=0.02% 00:27:57.316 lat (msec) : 2=0.02%, 4=0.02%, 50=6.91%, >=2000=0.02% 00:27:57.316 cpu : usr=0.13%, sys=0.22%, ctx=6010, majf=0, minf=1 00:27:57.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.316 issued rwts: total=2935,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:57.316 00:27:57.316 Run status group 0 (all jobs): 00:27:57.316 READ: bw=196KiB/s (200kB/s), 196KiB/s-196KiB/s (200kB/s-200kB/s), io=11.5MiB (12.0MB), run=60008-60008msec 00:27:57.316 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60008-60008msec 00:27:57.316 00:27:57.316 Disk stats (read/write): 00:27:57.316 nvme0n1: ios=2984/3072, merge=0/0, 
ticks=18889/701, in_queue=19590, util=99.73% 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:57.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:57.316 nvmf hotplug test: fio successful as expected 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.316 
11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.316 rmmod nvme_tcp 00:27:57.316 rmmod nvme_fabrics 00:27:57.316 rmmod nvme_keyring 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 304764 ']' 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 304764 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 304764 ']' 
00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 304764 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.316 11:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304764 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304764' 00:27:57.316 killing process with pid 304764 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 304764 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 304764 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 
-- # iptables-restore 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.316 11:22:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:57.885 00:27:57.885 real 1m8.422s 00:27:57.885 user 4m11.335s 00:27:57.885 sys 0m6.713s 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.885 ************************************ 00:27:57.885 END TEST nvmf_initiator_timeout 00:27:57.885 ************************************ 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:57.885 11:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.417 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.418 11:22:24 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:00.418 ************************************ 00:28:00.418 START 
TEST nvmf_perf_adq 00:28:00.418 ************************************ 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:00.418 * Looking for test storage... 00:28:00.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:00.418 11:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.418 --rc genhtml_branch_coverage=1 00:28:00.418 --rc genhtml_function_coverage=1 00:28:00.418 --rc genhtml_legend=1 00:28:00.418 --rc geninfo_all_blocks=1 00:28:00.418 --rc geninfo_unexecuted_blocks=1 00:28:00.418 00:28:00.418 ' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.418 --rc genhtml_branch_coverage=1 00:28:00.418 --rc genhtml_function_coverage=1 00:28:00.418 --rc genhtml_legend=1 00:28:00.418 --rc geninfo_all_blocks=1 00:28:00.418 --rc geninfo_unexecuted_blocks=1 00:28:00.418 00:28:00.418 ' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.418 --rc genhtml_branch_coverage=1 00:28:00.418 --rc genhtml_function_coverage=1 00:28:00.418 --rc genhtml_legend=1 00:28:00.418 --rc geninfo_all_blocks=1 00:28:00.418 --rc geninfo_unexecuted_blocks=1 00:28:00.418 00:28:00.418 ' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:00.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:00.418 --rc genhtml_branch_coverage=1 00:28:00.418 --rc genhtml_function_coverage=1 00:28:00.418 --rc genhtml_legend=1 00:28:00.418 --rc geninfo_all_blocks=1 00:28:00.418 --rc geninfo_unexecuted_blocks=1 00:28:00.418 00:28:00.418 ' 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:00.418 
11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:00.418 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:00.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:00.419 11:22:24 
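The `paths/export.sh` lines above prepend the same toolchain directories each time the file is sourced, so the echoed `PATH` accumulates many duplicate entries. As a hedged illustration (not part of SPDK's scripts), duplicates in a `PATH`-like string can be collapsed while keeping first occurrences:

```shell
#!/usr/bin/env bash
# Sketch only: collapse duplicate entries in a colon-separated PATH-like
# string, keeping the first occurrence of each. Illustrates the duplication
# visible in the exported PATH above; not part of paths/export.sh.
dedup_path() {
    local IFS=':' entry out=
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;                 # already kept, drop duplicate
            *) out+="${out:+:}$entry" ;;     # first occurrence, keep
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
# → /opt/go/1.21.1/bin:/usr/bin:/bin
```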
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.419 11:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.323 11:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:02.323 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:02.323 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:02.323 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.323 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:02.323 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:02.324 11:22:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:02.894 11:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:07.080 11:22:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.356 11:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.356 11:22:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:12.356 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:12.356 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:12.356 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.356 11:22:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.356 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.356 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:12.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:28:12.357 00:28:12.357 --- 10.0.0.2 ping statistics --- 00:28:12.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.357 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:12.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:28:12.357 00:28:12.357 --- 10.0.0.1 ping statistics --- 00:28:12.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.357 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=317034 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
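The `ipts` call above expands to an `iptables` invocation that appends `-m comment --comment 'SPDK_NVMF:<original args>'`, tagging every rule the test adds so it can later be found and removed during cleanup. As a hedged sketch of that tagging pattern (this version only prints the command rather than executing `iptables`, so it needs no root; the real helper runs it):

```shell
#!/usr/bin/env bash
# Sketch of the rule-tagging pattern behind the log's "ipts" helper: the
# original arguments are duplicated into an SPDK_NVMF comment so cleanup can
# match exactly the rules this test installed. Prints instead of executing.
ipts_cmd() {
    printf 'iptables %s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}

ipts_cmd -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```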
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 317034 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 317034 ']' 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 [2024-11-17 11:22:36.197438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:12.357 [2024-11-17 11:22:36.197521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.357 [2024-11-17 11:22:36.270012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.357 [2024-11-17 11:22:36.316411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.357 [2024-11-17 11:22:36.316464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:12.357 [2024-11-17 11:22:36.316494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.357 [2024-11-17 11:22:36.316506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.357 [2024-11-17 11:22:36.316515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.357 [2024-11-17 11:22:36.318092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.357 [2024-11-17 11:22:36.318155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.357 [2024-11-17 11:22:36.318186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.357 [2024-11-17 11:22:36.318188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.357 [2024-11-17 11:22:36.637724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.357 11:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:12.357 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.358 Malloc1 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:12.358 [2024-11-17 11:22:36.698571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=317069 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:12.358 11:22:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:14.258 "tick_rate": 2700000000, 00:28:14.258 "poll_groups": [ 00:28:14.258 { 00:28:14.258 "name": "nvmf_tgt_poll_group_000", 00:28:14.258 "admin_qpairs": 1, 00:28:14.258 "io_qpairs": 1, 00:28:14.258 "current_admin_qpairs": 1, 00:28:14.258 "current_io_qpairs": 1, 00:28:14.258 "pending_bdev_io": 0, 00:28:14.258 "completed_nvme_io": 19920, 00:28:14.258 "transports": [ 00:28:14.258 { 00:28:14.258 "trtype": "TCP" 00:28:14.258 } 00:28:14.258 ] 00:28:14.258 }, 00:28:14.258 { 00:28:14.258 "name": "nvmf_tgt_poll_group_001", 00:28:14.258 "admin_qpairs": 0, 00:28:14.258 "io_qpairs": 1, 00:28:14.258 "current_admin_qpairs": 0, 00:28:14.258 "current_io_qpairs": 1, 00:28:14.258 "pending_bdev_io": 0, 00:28:14.258 "completed_nvme_io": 19518, 00:28:14.258 "transports": [ 00:28:14.258 { 
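The target-side setup traced above (`adq_configure_nvmf_target`, perf_adq.sh@42-49) can be collected into one plain command sequence. This is a sketch, not the script itself: it assumes `rpc_cmd` resolves to the SPDK tree's `scripts/rpc.py`, and the `$placement` variable stands in for the argument the function receives (0 for this baseline run, 1 for the later ADQ run).

```shell
# Sketch of adq_configure_nvmf_target as direct rpc.py calls (assumed path).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
placement=0   # 0 here (baseline); the second run passes 1

# Query the default socket implementation ("posix" in this run)
impl=$($rpc sock_get_default_impl | jq -r .impl_name)
$rpc sock_impl_set_options --enable-placement-id "$placement" \
    --enable-zerocopy-send-server -i "$impl"
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority "$placement"

# Back the subsystem with a 64 MiB malloc bdev and expose it on 10.0.0.2:4420
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

These RPCs only succeed against a running `nvmf_tgt` started with `--wait-for-rpc`, as the app is in this test.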
00:28:14.258 "trtype": "TCP" 00:28:14.258 } 00:28:14.258 ] 00:28:14.258 }, 00:28:14.258 { 00:28:14.258 "name": "nvmf_tgt_poll_group_002", 00:28:14.258 "admin_qpairs": 0, 00:28:14.258 "io_qpairs": 1, 00:28:14.258 "current_admin_qpairs": 0, 00:28:14.258 "current_io_qpairs": 1, 00:28:14.258 "pending_bdev_io": 0, 00:28:14.258 "completed_nvme_io": 19846, 00:28:14.258 "transports": [ 00:28:14.258 { 00:28:14.258 "trtype": "TCP" 00:28:14.258 } 00:28:14.258 ] 00:28:14.258 }, 00:28:14.258 { 00:28:14.258 "name": "nvmf_tgt_poll_group_003", 00:28:14.258 "admin_qpairs": 0, 00:28:14.258 "io_qpairs": 1, 00:28:14.258 "current_admin_qpairs": 0, 00:28:14.258 "current_io_qpairs": 1, 00:28:14.258 "pending_bdev_io": 0, 00:28:14.258 "completed_nvme_io": 20005, 00:28:14.258 "transports": [ 00:28:14.258 { 00:28:14.258 "trtype": "TCP" 00:28:14.258 } 00:28:14.258 ] 00:28:14.258 } 00:28:14.258 ] 00:28:14.258 }' 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:14.258 11:22:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 317069 00:28:22.440 Initializing NVMe Controllers 00:28:22.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:22.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:22.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:22.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
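The check traced above pipes `rpc_cmd nvmf_get_stats` through `jq` and `wc -l` to confirm that all four poll groups carry exactly one active IO qpair, i.e. that connections were spread one per core. A standalone sketch of the same count, using `grep` in place of `jq` on a trimmed copy of the stats JSON:

```shell
# Trimmed copy of the nvmf_get_stats output seen in the log (one poll
# group per line so grep -c can count them).
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":1},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":1}]}'

# Count poll groups with exactly one active IO qpair; the test expects 4
# (one per core in the 0xF reactor mask).
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs":1')
if [ "$count" -ne 4 ]; then
  echo "expected 4 busy poll groups, got $count"
fi
echo "$count"
```

The script's own version fails the run (`[[ $count -ne 4 ]]`) when any group is idle, since that would mean qpair placement did not follow the socket placement ID.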
lcore 7 00:28:22.440 Initialization complete. Launching workers. 00:28:22.440 ======================================================== 00:28:22.440 Latency(us) 00:28:22.440 Device Information : IOPS MiB/s Average min max 00:28:22.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10437.20 40.77 6133.50 2464.03 10043.75 00:28:22.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10309.50 40.27 6207.55 2489.66 10243.64 00:28:22.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10445.20 40.80 6127.16 2425.92 10308.65 00:28:22.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10533.70 41.15 6076.40 2523.83 9831.59 00:28:22.440 ======================================================== 00:28:22.440 Total : 41725.60 162.99 6135.79 2425.92 10308.65 00:28:22.440 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.440 rmmod nvme_tcp 00:28:22.440 rmmod nvme_fabrics 00:28:22.440 rmmod nvme_keyring 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # 
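The Total row in the latency summary above is the sum of the four per-core figures. A quick recomputation from the IOPS values copied out of the table:

```shell
# Re-derive the Total IOPS from the four per-core results reported by
# spdk_nvme_perf (cores 4-7); should reproduce the printed 41725.60.
total=$(awk 'BEGIN { printf "%.2f", 10437.20 + 10309.50 + 10445.20 + 10533.70 }')
echo "total IOPS: $total"
```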
return 0 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 317034 ']' 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 317034 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 317034 ']' 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 317034 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317034 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317034' 00:28:22.440 killing process with pid 317034 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 317034 00:28:22.440 11:22:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 317034 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # 
iptables-save 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.720 11:22:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.754 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.754 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:24.754 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:24.754 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:25.319 11:22:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:27.851 11:22:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.139 11:22:57 
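The `adq_reload_driver` step traced above (perf_adq.sh@58-63) as a standalone sketch. Reloading `ice` between the baseline and ADQ runs clears any leftover channel and filter state on the NIC; this obviously requires root and an E810 port bound to the `ice` driver.

```shell
# Sketch of the driver reload between perf runs (commands from the trace).
modprobe -a sch_mqprio   # make sure the mqprio qdisc module is available
rmmod ice || true        # unload the E810 driver; ignore if not loaded
modprobe ice             # reload it with a clean channel/filter state
sleep 5                  # give the netdevs time to come back before nvmftestinit
```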
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.139 11:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:33.139 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:33.139 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:33.139 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:33.139 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:33.139 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:28:33.140 00:28:33.140 --- 10.0.0.2 ping statistics --- 00:28:33.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.140 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:28:33.140 00:28:33.140 --- 10.0.0.1 ping statistics --- 00:28:33.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.140 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:33.140 net.core.busy_poll = 1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:33.140 net.core.busy_read = 1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=319836 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
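The `adq_configure_driver` steps traced above, collected into one annotated sequence. The commands are copied from the log; they assume root privileges, an `ice`-driven E810 port named `cvl_0_0` living inside the `cvl_0_0_ns_spdk` namespace, and the target listening on 10.0.0.2:4420.

```shell
# Sketch of the ADQ host configuration (perf_adq.sh@22-38).
NS="ip netns exec cvl_0_0_ns_spdk"

# Enable hardware TC offload and turn off packet-inspect optimization
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Busy polling: let application threads poll the NIC queues directly
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes in channel mode: TC0 (default) on queues 0-1,
# TC1 (NVMe/TCP) on queues 2-3
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress

# Steer NVMe/TCP traffic (dst port 4420) into TC1 in hardware (skip_sw)
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

After this, the test also runs the SPDK `set_xps_rxqs` helper on `cvl_0_0` to align transmit queues with receive queues, as seen at perf_adq.sh@38.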
319836 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 319836 ']' 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.140 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.140 [2024-11-17 11:22:57.708440] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:33.140 [2024-11-17 11:22:57.708552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.140 [2024-11-17 11:22:57.778990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.399 [2024-11-17 11:22:57.825491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.399 [2024-11-17 11:22:57.825563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.399 [2024-11-17 11:22:57.825604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.399 [2024-11-17 11:22:57.825617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:33.399 [2024-11-17 11:22:57.825627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.399 [2024-11-17 11:22:57.827017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.399 [2024-11-17 11:22:57.827076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.399 [2024-11-17 11:22:57.827142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.399 [2024-11-17 11:22:57.827144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.399 11:22:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.399 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.399 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:33.399 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.399 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 [2024-11-17 11:22:58.116907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.657 11:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 Malloc1 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.657 [2024-11-17 11:22:58.182548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=319869 
00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:33.657 11:22:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:35.556 "tick_rate": 2700000000, 00:28:35.556 "poll_groups": [ 00:28:35.556 { 00:28:35.556 "name": "nvmf_tgt_poll_group_000", 00:28:35.556 "admin_qpairs": 1, 00:28:35.556 "io_qpairs": 3, 00:28:35.556 "current_admin_qpairs": 1, 00:28:35.556 "current_io_qpairs": 3, 00:28:35.556 "pending_bdev_io": 0, 00:28:35.556 "completed_nvme_io": 26350, 00:28:35.556 "transports": [ 00:28:35.556 { 00:28:35.556 "trtype": "TCP" 00:28:35.556 } 00:28:35.556 ] 00:28:35.556 }, 00:28:35.556 { 00:28:35.556 "name": "nvmf_tgt_poll_group_001", 00:28:35.556 "admin_qpairs": 0, 00:28:35.556 "io_qpairs": 1, 00:28:35.556 "current_admin_qpairs": 0, 00:28:35.556 "current_io_qpairs": 1, 00:28:35.556 "pending_bdev_io": 0, 00:28:35.556 "completed_nvme_io": 25087, 00:28:35.556 "transports": [ 00:28:35.556 { 00:28:35.556 "trtype": "TCP" 00:28:35.556 } 00:28:35.556 ] 00:28:35.556 }, 00:28:35.556 { 00:28:35.556 "name": "nvmf_tgt_poll_group_002", 00:28:35.556 "admin_qpairs": 0, 00:28:35.556 "io_qpairs": 0, 00:28:35.556 "current_admin_qpairs": 0, 
00:28:35.556 "current_io_qpairs": 0, 00:28:35.556 "pending_bdev_io": 0, 00:28:35.556 "completed_nvme_io": 0, 00:28:35.556 "transports": [ 00:28:35.556 { 00:28:35.556 "trtype": "TCP" 00:28:35.556 } 00:28:35.556 ] 00:28:35.556 }, 00:28:35.556 { 00:28:35.556 "name": "nvmf_tgt_poll_group_003", 00:28:35.556 "admin_qpairs": 0, 00:28:35.556 "io_qpairs": 0, 00:28:35.556 "current_admin_qpairs": 0, 00:28:35.556 "current_io_qpairs": 0, 00:28:35.556 "pending_bdev_io": 0, 00:28:35.556 "completed_nvme_io": 0, 00:28:35.556 "transports": [ 00:28:35.556 { 00:28:35.556 "trtype": "TCP" 00:28:35.556 } 00:28:35.556 ] 00:28:35.556 } 00:28:35.556 ] 00:28:35.556 }' 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:35.556 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:35.814 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:35.814 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:35.814 11:23:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 319869 00:28:43.924 Initializing NVMe Controllers 00:28:43.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:43.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:43.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:43.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:43.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:43.924 Initialization complete. Launching workers. 
00:28:43.924 ======================================================== 00:28:43.924 Latency(us) 00:28:43.924 Device Information : IOPS MiB/s Average min max 00:28:43.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4676.40 18.27 13708.49 1874.80 61822.26 00:28:43.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13564.39 52.99 4718.36 1679.76 7340.66 00:28:43.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4732.30 18.49 13543.86 1908.18 61276.54 00:28:43.924 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4398.60 17.18 14594.94 1800.38 62463.00 00:28:43.924 ======================================================== 00:28:43.924 Total : 27371.69 106.92 9367.30 1679.76 62463.00 00:28:43.924 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.924 rmmod nvme_tcp 00:28:43.924 rmmod nvme_fabrics 00:28:43.924 rmmod nvme_keyring 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:43.924 11:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 319836 ']' 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 319836 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 319836 ']' 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 319836 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 319836 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 319836' 00:28:43.924 killing process with pid 319836 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 319836 00:28:43.924 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 319836 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:44.184 11:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.184 11:23:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:47.472 00:28:47.472 real 0m47.252s 00:28:47.472 user 2m40.838s 00:28:47.472 sys 0m10.393s 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.472 ************************************ 00:28:47.472 END TEST nvmf_perf_adq 00:28:47.472 ************************************ 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:47.472 ************************************ 00:28:47.472 START TEST nvmf_shutdown 00:28:47.472 ************************************ 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:47.472 * Looking for test storage... 00:28:47.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.472 11:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:47.472 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.473 --rc genhtml_branch_coverage=1 00:28:47.473 --rc genhtml_function_coverage=1 00:28:47.473 --rc genhtml_legend=1 00:28:47.473 --rc geninfo_all_blocks=1 00:28:47.473 --rc geninfo_unexecuted_blocks=1 00:28:47.473 00:28:47.473 ' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.473 --rc genhtml_branch_coverage=1 00:28:47.473 --rc genhtml_function_coverage=1 00:28:47.473 --rc genhtml_legend=1 00:28:47.473 --rc geninfo_all_blocks=1 00:28:47.473 --rc geninfo_unexecuted_blocks=1 00:28:47.473 00:28:47.473 ' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.473 --rc genhtml_branch_coverage=1 00:28:47.473 --rc genhtml_function_coverage=1 00:28:47.473 --rc genhtml_legend=1 00:28:47.473 --rc geninfo_all_blocks=1 00:28:47.473 --rc geninfo_unexecuted_blocks=1 00:28:47.473 00:28:47.473 ' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.473 --rc genhtml_branch_coverage=1 00:28:47.473 --rc genhtml_function_coverage=1 00:28:47.473 --rc genhtml_legend=1 00:28:47.473 --rc geninfo_all_blocks=1 00:28:47.473 --rc geninfo_unexecuted_blocks=1 00:28:47.473 00:28:47.473 ' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:47.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.473 11:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:47.473 ************************************ 00:28:47.473 START TEST nvmf_shutdown_tc1 00:28:47.473 ************************************ 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:47.473 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.474 11:23:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:50.008 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.008 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:50.008 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.008 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:50.008 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:50.008 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:50.008 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.008 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.008 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:28:50.009 00:28:50.009 --- 10.0.0.2 ping statistics --- 00:28:50.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.009 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:50.009 00:28:50.009 --- 10.0.0.1 ping statistics --- 00:28:50.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.009 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=323170 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 323170 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 323170 ']' 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:50.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.009 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.009 [2024-11-17 11:23:14.500742] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:50.009 [2024-11-17 11:23:14.500834] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.009 [2024-11-17 11:23:14.574007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.009 [2024-11-17 11:23:14.620856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.009 [2024-11-17 11:23:14.620924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.009 [2024-11-17 11:23:14.620937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.009 [2024-11-17 11:23:14.620948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.009 [2024-11-17 11:23:14.620958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.009 [2024-11-17 11:23:14.622450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.009 [2024-11-17 11:23:14.622548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.009 [2024-11-17 11:23:14.622648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:50.009 [2024-11-17 11:23:14.622651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.267 [2024-11-17 11:23:14.780471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.267 11:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.267 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.268 11:23:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.268 Malloc1 00:28:50.268 [2024-11-17 11:23:14.878249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.268 Malloc2 00:28:50.526 Malloc3 00:28:50.526 Malloc4 00:28:50.526 Malloc5 00:28:50.526 Malloc6 00:28:50.526 Malloc7 00:28:50.784 Malloc8 00:28:50.784 Malloc9 
00:28:50.784 Malloc10 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=323350 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 323350 /var/tmp/bdevperf.sock 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 323350 ']' 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:50.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:50.784 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": 
${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 
00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.785 { 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme$subsystem", 00:28:50.785 "trtype": "$TEST_TRANSPORT", 00:28:50.785 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.785 "adrfam": "ipv4", 00:28:50.785 "trsvcid": "$NVMF_PORT", 00:28:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.785 "hdgst": ${hdgst:-false}, 00:28:50.785 "ddgst": ${ddgst:-false} 00:28:50.785 }, 00:28:50.785 "method": "bdev_nvme_attach_controller" 00:28:50.785 } 00:28:50.785 EOF 00:28:50.785 )") 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:50.785 11:23:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.785 "params": { 00:28:50.785 "name": "Nvme1", 00:28:50.785 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme2", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme3", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme4", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 
00:28:50.786 "params": { 00:28:50.786 "name": "Nvme5", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme6", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme7", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme8", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme9", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:50.786 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 },{ 00:28:50.786 "params": { 00:28:50.786 "name": "Nvme10", 00:28:50.786 "trtype": "tcp", 00:28:50.786 "traddr": "10.0.0.2", 00:28:50.786 "adrfam": "ipv4", 00:28:50.786 "trsvcid": "4420", 00:28:50.786 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:50.786 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:50.786 "hdgst": false, 00:28:50.786 "ddgst": false 00:28:50.786 }, 00:28:50.786 "method": "bdev_nvme_attach_controller" 00:28:50.786 }' 00:28:50.786 [2024-11-17 11:23:15.406073] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:28:50.786 [2024-11-17 11:23:15.406162] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:51.044 [2024-11-17 11:23:15.477749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.044 [2024-11-17 11:23:15.524467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 323350 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:52.942 11:23:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:53.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 323350 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 323170 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.876 { 00:28:53.876 "params": { 00:28:53.876 "name": "Nvme$subsystem", 00:28:53.876 "trtype": "$TEST_TRANSPORT", 00:28:53.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.876 "adrfam": "ipv4", 00:28:53.876 "trsvcid": "$NVMF_PORT", 00:28:53.876 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.876 "hdgst": ${hdgst:-false}, 00:28:53.876 "ddgst": ${ddgst:-false} 00:28:53.876 }, 00:28:53.876 "method": "bdev_nvme_attach_controller" 00:28:53.876 } 00:28:53.876 EOF 00:28:53.876 )") 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.876 { 00:28:53.876 "params": { 00:28:53.876 "name": "Nvme$subsystem", 00:28:53.876 "trtype": "$TEST_TRANSPORT", 00:28:53.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.876 "adrfam": "ipv4", 00:28:53.876 "trsvcid": "$NVMF_PORT", 00:28:53.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.876 "hdgst": ${hdgst:-false}, 00:28:53.876 "ddgst": ${ddgst:-false} 00:28:53.876 }, 00:28:53.876 "method": "bdev_nvme_attach_controller" 00:28:53.876 } 00:28:53.876 EOF 00:28:53.876 )") 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.876 { 00:28:53.876 "params": { 00:28:53.876 "name": "Nvme$subsystem", 00:28:53.876 "trtype": "$TEST_TRANSPORT", 00:28:53.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.876 "adrfam": "ipv4", 00:28:53.876 "trsvcid": "$NVMF_PORT", 00:28:53.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.876 "hdgst": 
${hdgst:-false}, 00:28:53.876 "ddgst": ${ddgst:-false} 00:28:53.876 }, 00:28:53.876 "method": "bdev_nvme_attach_controller" 00:28:53.876 } 00:28:53.876 EOF 00:28:53.876 )") 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.876 { 00:28:53.876 "params": { 00:28:53.876 "name": "Nvme$subsystem", 00:28:53.876 "trtype": "$TEST_TRANSPORT", 00:28:53.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.876 "adrfam": "ipv4", 00:28:53.876 "trsvcid": "$NVMF_PORT", 00:28:53.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.876 "hdgst": ${hdgst:-false}, 00:28:53.876 "ddgst": ${ddgst:-false} 00:28:53.876 }, 00:28:53.876 "method": "bdev_nvme_attach_controller" 00:28:53.876 } 00:28:53.876 EOF 00:28:53.876 )") 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.876 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.876 { 00:28:53.876 "params": { 00:28:53.876 "name": "Nvme$subsystem", 00:28:53.876 "trtype": "$TEST_TRANSPORT", 00:28:53.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.876 "adrfam": "ipv4", 00:28:53.876 "trsvcid": "$NVMF_PORT", 00:28:53.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.876 "hdgst": ${hdgst:-false}, 00:28:53.876 "ddgst": ${ddgst:-false} 00:28:53.876 }, 00:28:53.876 "method": "bdev_nvme_attach_controller" 
00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.877 { 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme$subsystem", 00:28:53.877 "trtype": "$TEST_TRANSPORT", 00:28:53.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "$NVMF_PORT", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.877 "hdgst": ${hdgst:-false}, 00:28:53.877 "ddgst": ${ddgst:-false} 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.877 { 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme$subsystem", 00:28:53.877 "trtype": "$TEST_TRANSPORT", 00:28:53.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "$NVMF_PORT", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.877 "hdgst": ${hdgst:-false}, 00:28:53.877 "ddgst": ${ddgst:-false} 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.877 { 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme$subsystem", 00:28:53.877 "trtype": "$TEST_TRANSPORT", 00:28:53.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "$NVMF_PORT", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.877 "hdgst": ${hdgst:-false}, 00:28:53.877 "ddgst": ${ddgst:-false} 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.877 { 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme$subsystem", 00:28:53.877 "trtype": "$TEST_TRANSPORT", 00:28:53.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "$NVMF_PORT", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.877 "hdgst": ${hdgst:-false}, 00:28:53.877 "ddgst": ${ddgst:-false} 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:53.877 { 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme$subsystem", 00:28:53.877 "trtype": "$TEST_TRANSPORT", 00:28:53.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "$NVMF_PORT", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.877 "hdgst": ${hdgst:-false}, 00:28:53.877 "ddgst": ${ddgst:-false} 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 } 00:28:53.877 EOF 00:28:53.877 )") 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:53.877 11:23:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme1", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme2", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 
00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme3", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme4", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme5", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme6", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme7", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme8", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.877 },{ 00:28:53.877 "params": { 00:28:53.877 "name": "Nvme9", 00:28:53.877 "trtype": "tcp", 00:28:53.877 "traddr": "10.0.0.2", 00:28:53.877 "adrfam": "ipv4", 00:28:53.877 "trsvcid": "4420", 00:28:53.877 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:53.877 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:53.877 "hdgst": false, 00:28:53.877 "ddgst": false 00:28:53.877 }, 00:28:53.877 "method": "bdev_nvme_attach_controller" 00:28:53.878 },{ 00:28:53.878 "params": { 00:28:53.878 "name": "Nvme10", 00:28:53.878 "trtype": "tcp", 00:28:53.878 "traddr": "10.0.0.2", 00:28:53.878 "adrfam": "ipv4", 00:28:53.878 "trsvcid": "4420", 00:28:53.878 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:53.878 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:53.878 "hdgst": false, 00:28:53.878 "ddgst": false 00:28:53.878 }, 00:28:53.878 "method": "bdev_nvme_attach_controller" 00:28:53.878 }' 00:28:53.878 [2024-11-17 11:23:18.460480] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:53.878 [2024-11-17 11:23:18.460582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323772 ] 00:28:54.136 [2024-11-17 11:23:18.532999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.136 [2024-11-17 11:23:18.579746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.516 Running I/O for 1 seconds... 00:28:56.449 1823.00 IOPS, 113.94 MiB/s 00:28:56.449 Latency(us) 00:28:56.449 [2024-11-17T10:23:21.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.449 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme1n1 : 1.15 223.39 13.96 0.00 0.00 283719.68 28156.21 253211.69 00:28:56.449 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme2n1 : 1.15 222.35 13.90 0.00 0.00 280510.20 20777.34 253211.69 00:28:56.449 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme3n1 : 1.09 235.02 14.69 0.00 0.00 259485.01 16117.00 259425.47 00:28:56.449 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme4n1 : 1.10 237.62 14.85 0.00 0.00 251637.87 9757.58 250104.79 00:28:56.449 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme5n1 : 1.11 238.67 14.92 0.00 0.00 246755.31 3276.80 234570.33 00:28:56.449 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 
0x400 00:28:56.449 Nvme6n1 : 1.11 234.86 14.68 0.00 0.00 245669.14 13689.74 250104.79 00:28:56.449 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme7n1 : 1.12 232.30 14.52 0.00 0.00 245293.08 2160.26 273406.48 00:28:56.449 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme8n1 : 1.19 269.31 16.83 0.00 0.00 209784.53 15825.73 254765.13 00:28:56.449 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme9n1 : 1.16 228.20 14.26 0.00 0.00 242195.99 983.04 276513.37 00:28:56.449 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.449 Verification LBA range: start 0x0 length 0x400 00:28:56.449 Nvme10n1 : 1.20 266.86 16.68 0.00 0.00 204748.91 5776.88 262532.36 00:28:56.449 [2024-11-17T10:23:21.107Z] =================================================================================================================== 00:28:56.449 [2024-11-17T10:23:21.107Z] Total : 2388.57 149.29 0.00 0.00 245099.28 983.04 276513.37 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.707 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.708 rmmod nvme_tcp 00:28:56.708 rmmod nvme_fabrics 00:28:56.708 rmmod nvme_keyring 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 323170 ']' 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 323170 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 323170 ']' 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 323170 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # 
'[' Linux = Linux ']' 00:28:56.708 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323170 00:28:56.965 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:56.965 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:56.966 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323170' 00:28:56.966 killing process with pid 323170 00:28:56.966 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 323170 00:28:56.966 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 323170 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.224 11:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.224 11:23:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.758 00:28:59.758 real 0m11.901s 00:28:59.758 user 0m33.632s 00:28:59.758 sys 0m3.314s 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:59.758 ************************************ 00:28:59.758 END TEST nvmf_shutdown_tc1 00:28:59.758 ************************************ 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:59.758 ************************************ 00:28:59.758 START TEST nvmf_shutdown_tc2 00:28:59.758 ************************************ 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:59.758 11:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.758 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.759 11:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.759 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.759 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.759 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.759 11:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.759 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.759 11:23:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:28:59.759 00:28:59.759 --- 10.0.0.2 ping statistics --- 00:28:59.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.759 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:28:59.759 00:28:59.759 --- 10.0.0.1 ping statistics --- 00:28:59.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.759 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 
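The nvmf_tcp_init steps logged above build a netns-based loopback topology: the target-side interface (cvl_0_0) is moved into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator side (cvl_0_1) stays in the root namespace with 10.0.0.1/24, the NVMe/TCP port 4420 is opened via iptables, and reachability is verified with ping in both directions. A minimal sketch of that sequence follows; by default it only records/prints the commands (running them for real with DRY_RUN=0 requires root and the cvl_* interfaces from this rig):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built by nvmf_tcp_init in the log above.
# DRY_RUN=1 (the default here) only records/prints the commands.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the target namespace
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk
CMDS=""

run() {
    CMDS+="$*"$'\n'
    if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; else echo "$*"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface, then verify.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace boundary is what lets a single host act as both NVMe-oF target and initiator over real NIC ports without a second machine.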
11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=324533 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 324533 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 324533 ']' 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.759 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.759 [2024-11-17 11:23:24.180991] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:28:59.759 [2024-11-17 11:23:24.181068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.759 [2024-11-17 11:23:24.252171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.759 [2024-11-17 11:23:24.300230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.759 [2024-11-17 11:23:24.300299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.759 [2024-11-17 11:23:24.300337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.760 [2024-11-17 11:23:24.300354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.760 [2024-11-17 11:23:24.300364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.760 [2024-11-17 11:23:24.301821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.760 [2024-11-17 11:23:24.301879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.760 [2024-11-17 11:23:24.301945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.760 [2024-11-17 11:23:24.301948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 [2024-11-17 11:23:24.441198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.017 11:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.017 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 Malloc1 00:29:00.017 [2024-11-17 11:23:24.531539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.017 Malloc2 00:29:00.017 Malloc3 00:29:00.017 Malloc4 00:29:00.275 Malloc5 00:29:00.275 Malloc6 00:29:00.275 Malloc7 00:29:00.275 Malloc8 00:29:00.275 Malloc9 
00:29:00.534 Malloc10 00:29:00.534 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.534 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:00.534 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.534 11:23:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=324705 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 324705 /var/tmp/bdevperf.sock 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 324705 ']' 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:00.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.534 { 00:29:00.534 "params": { 00:29:00.534 "name": "Nvme$subsystem", 00:29:00.534 "trtype": "$TEST_TRANSPORT", 00:29:00.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.534 "adrfam": "ipv4", 00:29:00.534 "trsvcid": "$NVMF_PORT", 00:29:00.534 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.534 "hdgst": ${hdgst:-false}, 00:29:00.534 "ddgst": ${ddgst:-false} 00:29:00.534 }, 00:29:00.534 "method": "bdev_nvme_attach_controller" 00:29:00.534 } 00:29:00.534 EOF 00:29:00.534 )") 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.534 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.534 { 00:29:00.534 "params": { 00:29:00.534 "name": "Nvme$subsystem", 00:29:00.534 "trtype": "$TEST_TRANSPORT", 00:29:00.534 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.534 "adrfam": "ipv4", 00:29:00.534 "trsvcid": "$NVMF_PORT", 00:29:00.534 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.534 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": 
${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 
00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.535 { 00:29:00.535 "params": { 00:29:00.535 "name": "Nvme$subsystem", 00:29:00.535 "trtype": "$TEST_TRANSPORT", 00:29:00.535 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.535 "adrfam": "ipv4", 00:29:00.535 "trsvcid": "$NVMF_PORT", 00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.535 "hdgst": ${hdgst:-false}, 00:29:00.535 "ddgst": ${ddgst:-false} 00:29:00.535 }, 00:29:00.535 "method": "bdev_nvme_attach_controller" 00:29:00.535 } 00:29:00.535 EOF 00:29:00.535 )") 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq .
00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:29:00.535 11:23:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:29:00.535 "params": {
00:29:00.535 "name": "Nvme1",
00:29:00.535 "trtype": "tcp",
00:29:00.535 "traddr": "10.0.0.2",
00:29:00.535 "adrfam": "ipv4",
00:29:00.535 "trsvcid": "4420",
00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:00.535 "hdgst": false,
00:29:00.535 "ddgst": false
00:29:00.535 },
00:29:00.535 "method": "bdev_nvme_attach_controller"
00:29:00.535 },{
00:29:00.535 "params": {
00:29:00.535 "name": "Nvme2",
00:29:00.535 "trtype": "tcp",
00:29:00.535 "traddr": "10.0.0.2",
00:29:00.535 "adrfam": "ipv4",
00:29:00.535 "trsvcid": "4420",
00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:29:00.535 "hdgst": false,
00:29:00.535 "ddgst": false
00:29:00.535 },
00:29:00.535 "method": "bdev_nvme_attach_controller"
00:29:00.535 },{
00:29:00.535 "params": {
00:29:00.535 "name": "Nvme3",
00:29:00.535 "trtype": "tcp",
00:29:00.535 "traddr": "10.0.0.2",
00:29:00.535 "adrfam": "ipv4",
00:29:00.535 "trsvcid": "4420",
00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:29:00.535 "hdgst": false,
00:29:00.535 "ddgst": false
00:29:00.535 },
00:29:00.535 "method": "bdev_nvme_attach_controller"
00:29:00.535 },{
00:29:00.535 "params": {
00:29:00.535 "name": "Nvme4",
00:29:00.535 "trtype": "tcp",
00:29:00.535 "traddr": "10.0.0.2",
00:29:00.535 "adrfam": "ipv4",
00:29:00.535 "trsvcid": "4420",
00:29:00.535 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:29:00.535 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:29:00.535 "hdgst": false,
00:29:00.535 "ddgst": false
00:29:00.535 },
00:29:00.535 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme5",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme6",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme7",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme8",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme9",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 },{
00:29:00.536 "params": {
00:29:00.536 "name": "Nvme10",
00:29:00.536 "trtype": "tcp",
00:29:00.536 "traddr": "10.0.0.2",
00:29:00.536 "adrfam": "ipv4",
00:29:00.536 "trsvcid": "4420",
00:29:00.536 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:29:00.536 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:29:00.536 "hdgst": false,
00:29:00.536 "ddgst": false
00:29:00.536 },
00:29:00.536 "method": "bdev_nvme_attach_controller"
00:29:00.536 }'
00:29:00.536 [2024-11-17 11:23:25.050452] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:29:00.536 [2024-11-17 11:23:25.050560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324705 ]
00:29:00.536 [2024-11-17 11:23:25.122797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.536 [2024-11-17 11:23:25.170096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:02.435 Running I/O for 10 seconds...
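The xtrace above is nvmf/common.sh assembling one `bdev_nvme_attach_controller` stanza per subsystem from a here-doc, accumulating them in an array, then joining the stanzas with `IFS=,` before the result is fed through `jq` to bdevperf. A minimal standalone sketch of that assembly pattern (two subsystems only; the transport, address, and port values simply mirror the resolved config printed above, not a live target):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from nvmf/common.sh seen in the
# trace above: one JSON stanza per subsystem, accumulated in an array,
# then comma-joined. hdgst/ddgst default to false via ${var:-false},
# exactly as in the traced here-doc.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the stanzas with commas (the IFS=, step in the trace); a subshell
# keeps the IFS change from leaking into the rest of the script.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The real script pipes this joined string through `jq .` to pretty-print and validate it, which is the resolved ten-controller document shown above.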
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:29:02.435 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:02.436 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.694 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.694 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3
00:29:02.694 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:29:02.694 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:29:02.952 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:29:03.211 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 324705
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 324705 ']'
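The loop traced above is `waitforio` from target/shutdown.sh: it polls `num_read_ops` for Nvme1n1 over the bdevperf RPC socket every 0.25 s, up to ten times, and succeeds once at least 100 reads have been observed (3, then 67, then 131 in this run). A self-contained sketch of that retry loop, with the `rpc_cmd | jq` query swapped for an injectable command so it runs without a live bdevperf:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop from target/shutdown.sh: poll a read
# counter up to 10 times, 0.25 s apart, until it reaches 100. The real
# script queries `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b
# Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`; here $1 names any command
# that prints the current count, so the loop itself is testable alone.
waitforio() {
    local query=$1 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$("$query")
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

Bounding the wait at 10 × 0.25 s keeps a stalled bdevperf from hanging the whole autotest run; on timeout the caller sees a nonzero status and fails the test instead.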
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 324705
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324705
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:03.212 11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324705'
killing process with pid 324705
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 324705
11:23:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 324705
00:29:03.212 1736.00 IOPS, 108.50 MiB/s [2024-11-17T10:23:27.870Z] Received shutdown signal, test time was about 1.065858 seconds
00:29:03.212
00:29:03.212 Latency(us)
00:29:03.212 [2024-11-17T10:23:27.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.212 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme1n1 : 1.02 187.97 11.75 0.00 0.00 336891.45 21942.42 274959.93
00:29:03.212 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme2n1 : 1.04 245.54 15.35 0.00 0.00 253137.35 31457.28 240784.12
00:29:03.212 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme3n1 : 1.05 248.06 15.50 0.00 0.00 245590.84 2949.12 254765.13
00:29:03.212 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme4n1 : 1.04 247.16 15.45 0.00 0.00 242394.83 20097.71 262532.36
00:29:03.212 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme5n1 : 1.06 241.93 15.12 0.00 0.00 243226.74 19029.71 260978.92
00:29:03.212 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme6n1 : 1.03 186.98 11.69 0.00 0.00 308068.00 22039.51 276513.37
00:29:03.212 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme7n1 : 1.07 240.37 15.02 0.00 0.00 236085.85 19806.44 262532.36
00:29:03.212 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme8n1 : 1.05 246.65 15.42 0.00 0.00 224832.50 2318.03 259425.47
00:29:03.212 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme9n1 : 1.03 186.14 11.63 0.00 0.00 291855.42 22233.69 292047.83
00:29:03.212 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:03.212 Verification LBA range: start 0x0 length 0x400
00:29:03.212 Nvme10n1 : 1.06 241.35 15.08 0.00 0.00 221699.22 23107.51 254765.13
00:29:03.212 [2024-11-17T10:23:27.870Z] ===================================================================================================================
00:29:03.212 [2024-11-17T10:23:27.870Z] Total : 2272.15 142.01 0.00 0.00 256100.08 2318.03 292047.83
00:29:03.470 11:23:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 324533
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:04.404 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:04.404 rmmod
nvme_tcp
00:29:04.662 rmmod nvme_fabrics
00:29:04.662 rmmod nvme_keyring
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 324533 ']'
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 324533
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 324533 ']'
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 324533
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324533
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324533'
killing process with pid 324533
11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 324533
00:29:04.662 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 324533
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:05.227 11:23:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:07.131
00:29:07.131 real 0m7.694s
00:29:07.131 user 0m23.732s
00:29:07.131 sys 0m1.492s
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.131 ************************************
00:29:07.131 END TEST nvmf_shutdown_tc2
00:29:07.131 ************************************
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:07.131 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:07.131 ************************************
00:29:07.131 START TEST nvmf_shutdown_tc3
00:29:07.131 ************************************
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:07.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:07.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.132 11:23:31 
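For orientation, the device-discovery trace above (nvmf/common.sh@320-344, matching 0x8086:0x159b to the e810 bucket) reduces to a vendor:device classification. A minimal sketch of that logic; the helper name `classify_nic` is illustrative and not part of nvmf/common.sh, and only the device IDs visible in this trace are covered:

```shell
# Hypothetical condensation of the e810/x722/mlx bucketing seen in the trace.
# Vendor 0x8086 is Intel, 0x15b3 is Mellanox; IDs taken from the log lines above.
classify_nic() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family (ice driver)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:0x*)                  echo mlx ;;     # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}
classify_nic 0x8086:0x159b
```

In the trace both 0000:0a:00.0 and 0000:0a:00.1 resolve to e810 and are bound to the `ice` driver, which is why the `[[ ice == unknown ]]` / `[[ ice == unbound ]]` guards fall through.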
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:07.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.132 11:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.132 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:07.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.133 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.391 11:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:29:07.391 00:29:07.391 --- 10.0.0.2 ping statistics --- 00:29:07.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.391 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:29:07.391 00:29:07.391 --- 10.0.0.1 ping statistics --- 00:29:07.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.391 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.391 
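The nvmf_tcp_init phase above (nvmf/common.sh@250-291) builds a two-interface topology: the target NIC cvl_0_0 is moved into a private network namespace while the initiator NIC cvl_0_1 stays in the root namespace, then both directions are ping-verified. A dry-run sketch of that plan; the `run` echo wrapper is illustrative so the commands can be inspected without root, and the names (cvl_0_0, cvl_0_1, 10.0.0.x) come from the log:

```shell
# Dry-run reconstruction of the namespace setup traced above.
# Swap `echo "$*"` for `"$@"` to actually execute (requires root).
ns=cvl_0_0_ns_spdk
run() { echo "$*"; }
run ip netns add "$ns"
run ip link set cvl_0_0 netns "$ns"                      # target NIC into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$ns" ip link set cvl_0_0 up
```

The `ip netns exec "$NVMF_TARGET_NS_CMD"` prefix recorded at @266 is what later wraps the nvmf_tgt invocation so the target binds inside the namespace.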
11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=325621 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 325621 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 325621 ']' 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.391 11:23:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.391 [2024-11-17 11:23:31.918614] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:07.391 [2024-11-17 11:23:31.918696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.391 [2024-11-17 11:23:31.989996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.391 [2024-11-17 11:23:32.036414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.391 [2024-11-17 11:23:32.036470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.391 [2024-11-17 11:23:32.036493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.391 [2024-11-17 11:23:32.036512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.391 [2024-11-17 11:23:32.036529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.391 [2024-11-17 11:23:32.038067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.391 [2024-11-17 11:23:32.038130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.391 [2024-11-17 11:23:32.038197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.391 [2024-11-17 11:23:32.038200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.649 [2024-11-17 11:23:32.181603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.649 11:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.649 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.649 Malloc1 00:29:07.650 [2024-11-17 11:23:32.283616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.907 Malloc2 00:29:07.907 Malloc3 00:29:07.907 Malloc4 00:29:07.907 Malloc5 00:29:07.907 Malloc6 00:29:07.907 Malloc7 00:29:08.246 Malloc8 00:29:08.246 Malloc9 
00:29:08.246 Malloc10 00:29:08.246 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.246 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:08.246 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.246 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=325723 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 325723 /var/tmp/bdevperf.sock 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 325723 ']' 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:08.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 "adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": ${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 
"adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": ${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 "adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": ${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 "adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": ${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 "adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": ${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.247 { 00:29:08.247 "params": { 00:29:08.247 "name": "Nvme$subsystem", 00:29:08.247 "trtype": "$TEST_TRANSPORT", 00:29:08.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.247 "adrfam": "ipv4", 00:29:08.247 "trsvcid": "$NVMF_PORT", 00:29:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.247 "hdgst": ${hdgst:-false}, 00:29:08.247 "ddgst": 
${ddgst:-false} 00:29:08.247 }, 00:29:08.247 "method": "bdev_nvme_attach_controller" 00:29:08.247 } 00:29:08.247 EOF 00:29:08.247 )") 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.247 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.248 { 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme$subsystem", 00:29:08.248 "trtype": "$TEST_TRANSPORT", 00:29:08.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "$NVMF_PORT", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.248 "hdgst": ${hdgst:-false}, 00:29:08.248 "ddgst": ${ddgst:-false} 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 } 00:29:08.248 EOF 00:29:08.248 )") 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.248 { 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme$subsystem", 00:29:08.248 "trtype": "$TEST_TRANSPORT", 00:29:08.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "$NVMF_PORT", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.248 "hdgst": ${hdgst:-false}, 00:29:08.248 "ddgst": ${ddgst:-false} 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 } 00:29:08.248 EOF 00:29:08.248 
)") 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.248 { 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme$subsystem", 00:29:08.248 "trtype": "$TEST_TRANSPORT", 00:29:08.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "$NVMF_PORT", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.248 "hdgst": ${hdgst:-false}, 00:29:08.248 "ddgst": ${ddgst:-false} 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 } 00:29:08.248 EOF 00:29:08.248 )") 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.248 { 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme$subsystem", 00:29:08.248 "trtype": "$TEST_TRANSPORT", 00:29:08.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "$NVMF_PORT", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.248 "hdgst": ${hdgst:-false}, 00:29:08.248 "ddgst": ${ddgst:-false} 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 } 00:29:08.248 EOF 00:29:08.248 )") 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.248 
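The ten repeated `config+=("$(cat <<-EOF ... EOF)")` blocks above are gen_nvmf_target_json instantiating one heredoc template per subsystem id, then comma-joining the fragments (the real helper additionally pipes through `jq` at @584, omitted here). A reduced two-subsystem sketch of that accumulation pattern, with a trimmed template:

```shell
# Simplified form of the heredoc-per-subsystem expansion in the trace.
# The template here keeps only two fields; the real one carries the full
# bdev_nvme_attach_controller params shown in the log.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# "${config[*]}" joins elements with the first character of IFS, as at @585/@586.
out="$(IFS=,; printf '%s\n' "${config[*]}")"
printf '%s\n' "$out"
```

The comma join is what produces the `},{`-separated stream of attach-controller entries printed at @586 below.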
11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:08.248 11:23:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme1", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme2", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme3", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme4", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 
00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme5", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme6", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:08.248 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:08.248 "hdgst": false, 00:29:08.248 "ddgst": false 00:29:08.248 }, 00:29:08.248 "method": "bdev_nvme_attach_controller" 00:29:08.248 },{ 00:29:08.248 "params": { 00:29:08.248 "name": "Nvme7", 00:29:08.248 "trtype": "tcp", 00:29:08.248 "traddr": "10.0.0.2", 00:29:08.248 "adrfam": "ipv4", 00:29:08.248 "trsvcid": "4420", 00:29:08.248 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:08.249 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:08.249 "hdgst": false, 00:29:08.249 "ddgst": false 00:29:08.249 }, 00:29:08.249 "method": "bdev_nvme_attach_controller" 00:29:08.249 },{ 00:29:08.249 "params": { 00:29:08.249 "name": "Nvme8", 00:29:08.249 "trtype": "tcp", 00:29:08.249 "traddr": "10.0.0.2", 00:29:08.249 "adrfam": "ipv4", 00:29:08.249 "trsvcid": "4420", 00:29:08.249 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:08.249 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:08.249 "hdgst": false, 00:29:08.249 "ddgst": false 00:29:08.249 }, 00:29:08.249 "method": "bdev_nvme_attach_controller" 00:29:08.249 },{ 00:29:08.249 "params": { 00:29:08.249 "name": "Nvme9", 00:29:08.249 "trtype": "tcp", 00:29:08.249 "traddr": "10.0.0.2", 00:29:08.249 "adrfam": "ipv4", 00:29:08.249 "trsvcid": "4420", 00:29:08.249 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:08.249 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:08.249 "hdgst": false, 00:29:08.249 "ddgst": false 00:29:08.249 }, 00:29:08.249 "method": "bdev_nvme_attach_controller" 00:29:08.249 },{ 00:29:08.249 "params": { 00:29:08.249 "name": "Nvme10", 00:29:08.249 "trtype": "tcp", 00:29:08.249 "traddr": "10.0.0.2", 00:29:08.249 "adrfam": "ipv4", 00:29:08.249 "trsvcid": "4420", 00:29:08.249 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:08.249 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:08.249 "hdgst": false, 00:29:08.249 "ddgst": false 00:29:08.249 }, 00:29:08.249 "method": "bdev_nvme_attach_controller" 00:29:08.249 }' 00:29:08.249 [2024-11-17 11:23:32.805558] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:08.249 [2024-11-17 11:23:32.805636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325723 ] 00:29:08.249 [2024-11-17 11:23:32.878464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.506 [2024-11-17 11:23:32.925813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.876 Running I/O for 10 seconds... 
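The trace above shows nvmf/common.sh building one bdev_nvme_attach_controller stanza per subsystem with a heredoc inside a command substitution, then joining the array elements with `IFS=,` before piping the result through `jq .`. A minimal stand-alone sketch of that pattern follows; the variable values are stand-ins for the test environment, not the values the real run exported:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly loop traced in the log (not the real
# nvmf/common.sh): one JSON stanza per subsystem, joined with commas.
TEST_TRANSPORT=tcp              # stand-in values; the real run exports these
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Heredoc inside $(...) expands the per-subsystem variables, exactly
  # as the "config+=(\"$(cat <<-EOF" trace lines show.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the stanzas with the first character of IFS (a comma), mirroring
# the IFS=, / printf '%s\n' "${config[*]}" steps in the trace.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

In the real test this joined document is fed to `jq .` for pretty-printing and then passed to bdevperf as its `--json` configuration; the sketch stops at producing the joined text.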
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:29:10.442 11:23:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 325621
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 325621 ']'
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 325621
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325621
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:10.716 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325621'
00:29:10.717 killing process with pid 325621
11:23:35
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 325621
00:29:10.717 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 325621
00:29:10.717 [2024-11-17 11:23:35.217969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cf810 is same with the state(6) to be set
00:29:10.717 [2024-11-17 11:23:35.223149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfce0 is same with the state(6) to be set
00:29:10.718 [2024-11-17 11:23:35.226441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set
00:29:10.719 [2024-11-17 11:23:35.227050]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227196] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.227308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d06a0 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.228986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229024] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229186] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.719 [2024-11-17 11:23:35.229298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229341] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229495] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229671] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.229820] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1040 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231461] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231641] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.720 [2024-11-17 11:23:35.231780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231793] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231962] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.231988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232133] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.232170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1510 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set 00:29:10.721 [2024-11-17 11:23:35.233414] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1a00 is same with the state(6) to be set
00:29:10.722 [2024-11-17 11:23:35.234861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1ed0 is same with the state(6) to be set
00:29:10.723 [2024-11-17 11:23:35.237657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.723 [2024-11-17 11:23:35.237703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.723 [2024-11-17 11:23:35.237736] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.723 [2024-11-17 11:23:35.237753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.725 [2024-11-17 11:23:35.239664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.725 [2024-11-17 11:23:35.239678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 
[2024-11-17 11:23:35.239694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.725 [2024-11-17 11:23:35.239708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.239921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.239945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.239961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.239975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.239991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2734dc0 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x273afd0 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 
[2024-11-17 11:23:35.240332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29743e0 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242f50 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 
11:23:35.240752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2736810 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.240925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23020a0 is same with the state(6) to be set 00:29:10.725 [2024-11-17 11:23:35.240970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.240991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.241007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.241020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.725 [2024-11-17 11:23:35.241035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.725 [2024-11-17 11:23:35.241049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f8700 is same with the state(6) to be set 00:29:10.726 [2024-11-17 11:23:35.241146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241215] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa450 is same with the state(6) to be set 00:29:10.726 [2024-11-17 11:23:35.241313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f46f0 is same with the state(6) to be set 00:29:10.726 [2024-11-17 11:23:35.241476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.726 [2024-11-17 11:23:35.241601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x273b670 is same with the state(6) to be set 00:29:10.726 [2024-11-17 11:23:35.241852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.241881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.241928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.241960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.241976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.241991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.726 [2024-11-17 11:23:35.242345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.726 [2024-11-17 11:23:35.242366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242412] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242590] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.242810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.242826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 
[2024-11-17 11:23:35.259462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.727 [2024-11-17 11:23:35.259779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.727 [2024-11-17 11:23:35.259795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.259975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.259994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 
11:23:35.260174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2961970 is same with the state(6) to be set 00:29:10.728 [2024-11-17 11:23:35.260835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.260969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.260985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.728 [2024-11-17 11:23:35.261263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.728 [2024-11-17 11:23:35.261277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261307] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 
11:23:35.261835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.261979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.261994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262009] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 
[2024-11-17 11:23:35.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.729 [2024-11-17 11:23:35.262372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.729 [2024-11-17 11:23:35.262386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.730 [2024-11-17 11:23:35.262681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.262695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.730 [2024-11-17 11:23:35.262710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.730 [2024-11-17 11:23:35.262724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.730 [2024-11-17 11:23:35.262744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.730 [2024-11-17 11:23:35.262759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.730 [2024-11-17 11:23:35.262775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.730 [2024-11-17 11:23:35.262791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.730 [2024-11-17 11:23:35.262807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.730 [2024-11-17 11:23:35.262822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.730 [2024-11-17 11:23:35.264481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2734dc0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x273afd0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29743e0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242f50 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2736810 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23020a0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f8700 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fa450 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f46f0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.264763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x273b670 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.267458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:10.730 [2024-11-17 11:23:35.267509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:10.730 [2024-11-17 11:23:35.268520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:10.730 [2024-11-17 11:23:35.268721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.730 [2024-11-17 11:23:35.268753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29743e0 with addr=10.0.0.2, port=4420
00:29:10.730 [2024-11-17 11:23:35.268772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29743e0 is same with the state(6) to be set
00:29:10.730 [2024-11-17 11:23:35.268862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.730 [2024-11-17 11:23:35.268888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f46f0 with addr=10.0.0.2, port=4420
00:29:10.730 [2024-11-17 11:23:35.268906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f46f0 is same with the state(6) to be set
00:29:10.730 [2024-11-17 11:23:35.269353] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.269747] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.269817] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.269886] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.270016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.730 [2024-11-17 11:23:35.270044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2736810 with addr=10.0.0.2, port=4420
00:29:10.730 [2024-11-17 11:23:35.270061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2736810 is same with the state(6) to be set
00:29:10.730 [2024-11-17 11:23:35.270084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29743e0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.270108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f46f0 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.270172] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.270242] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.730 [2024-11-17 11:23:35.270381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2736810 (9): Bad file descriptor
00:29:10.730 [2024-11-17 11:23:35.270407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:10.730 [2024-11-17 11:23:35.270422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:10.730 [2024-11-17 11:23:35.270439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:10.730 [2024-11-17 11:23:35.270457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:10.730 [2024-11-17 11:23:35.270475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:10.730 [2024-11-17 11:23:35.270488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:10.730 [2024-11-17 11:23:35.270501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:10.730 [2024-11-17 11:23:35.270513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:10.730 [2024-11-17 11:23:35.270638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.730 [2024-11-17 11:23:35.270665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.270975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.270989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.731 [2024-11-17 11:23:35.271185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.731 [2024-11-17 11:23:35.271789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.731 [2024-11-17 11:23:35.271803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.271834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.271865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 
11:23:35.271895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.271925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.271955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.271972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.271987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272070] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 
[2024-11-17 11:23:35.272423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.732 [2024-11-17 11:23:35.272650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.732 [2024-11-17 11:23:35.272665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2705610 is same with the state(6) to be set 00:29:10.732 [2024-11-17 11:23:35.272793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:10.732 [2024-11-17 11:23:35.272814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:10.732 [2024-11-17 11:23:35.272828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:10.732 [2024-11-17 11:23:35.272842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:10.733 [2024-11-17 11:23:35.274061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:10.733 [2024-11-17 11:23:35.274254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.733 [2024-11-17 11:23:35.274284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242f50 with addr=10.0.0.2, port=4420 00:29:10.733 [2024-11-17 11:23:35.274301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242f50 is same with the state(6) to be set 00:29:10.733 [2024-11-17 11:23:35.274644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242f50 (9): Bad file descriptor 00:29:10.733 [2024-11-17 11:23:35.274794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:10.733 [2024-11-17 11:23:35.274822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:10.733 [2024-11-17 11:23:35.274837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:10.733 [2024-11-17 11:23:35.274851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:10.733 [2024-11-17 11:23:35.274902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.733 [2024-11-17 11:23:35.274923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeated for cid:1 through cid:63 (lba 16512 through 24448, step 128), timestamps 11:23:35.274944 through 11:23:35.276878 ...]
00:29:10.735 [2024-11-17 11:23:35.276892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fccd0 is same with the state(6) to be set
00:29:10.735 [2024-11-17 11:23:35.278129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.735 [2024-11-17 11:23:35.278152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same NOTICE pair repeated for cid:1 through cid:53 (lba 16512 through 23168, step 128), timestamps 11:23:35.278173 through 11:23:35.290481 ...]
00:29:10.736 [2024-11-17 11:23:35.290497] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.736 [2024-11-17 11:23:35.290808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.736 [2024-11-17 11:23:35.290825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fde50 is same with the state(6) to be set 00:29:10.737 [2024-11-17 11:23:35.292197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.737 [2024-11-17 11:23:35.292246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 
11:23:35.292944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.292975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.292991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.293005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.293020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.293035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.293050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.293066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.293082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.737 [2024-11-17 11:23:35.293096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.737 [2024-11-17 11:23:35.293112] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.293340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.293354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2962d00 is same with the state(6) to be set 00:29:10.738 [2024-11-17 11:23:35.294483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 
11:23:35.294599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.294970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.294984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.738 [2024-11-17 11:23:35.295123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.738 [2024-11-17 11:23:35.295229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.738 [2024-11-17 11:23:35.295245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.739 [2024-11-17 11:23:35.295439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.739 [2024-11-17 11:23:35.295456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.739 [2024-11-17 11:23:35.295470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for READ sqid:1 cid:32-63 (nsid:1, lba:20480-24448, len:128), each aborted with SQ DELETION (00/08), timestamps 2024-11-17 11:23:35.295485 through 11:23:35.296474 ...]
00:29:10.740 [2024-11-17 11:23:35.296489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288d350 is same with the state(6) to be set
00:29:10.740 [2024-11-17 11:23:35.297736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.740 [2024-11-17 11:23:35.297760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for READ sqid:1 cid:5-57 (lba:17024-23680), WRITE sqid:1 cid:0-3 (lba:24576-24960), and READ sqid:1 cid:58-63 (lba:23808-24448), all len:128, each aborted with SQ DELETION (00/08), timestamps 2024-11-17 11:23:35.297781 through 11:23:35.299707 ...]
00:29:10.741 [2024-11-17 11:23:35.299722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288e830 is same with the state(6) to be set
00:29:10.741 [2024-11-17 11:23:35.300967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.741 [2024-11-17 11:23:35.300990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for READ sqid:1 cid:1-14 (nsid:1, lba:16512-18176, len:128), each aborted with SQ DELETION (00/08), timestamps 2024-11-17 11:23:35.301012 through 11:23:35.301425 ...]
00:29:10.742 [2024-11-17 11:23:35.301441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.742 [2024-11-17 11:23:35.301642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301805] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.742 [2024-11-17 11:23:35.301821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.742 [2024-11-17 11:23:35.301835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.301850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.301865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.301884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.301899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.301915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.301928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.301945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.301959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.301975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.301988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 
11:23:35.302324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302493] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 
[2024-11-17 11:23:35.302857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.743 [2024-11-17 11:23:35.302873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.743 [2024-11-17 11:23:35.302887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.744 [2024-11-17 11:23:35.302903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-11-17 11:23:35.302918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.744 [2024-11-17 11:23:35.302934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-11-17 11:23:35.302948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.744 [2024-11-17 11:23:35.302963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x288fca0 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.304593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.304627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.304646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.304665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:10.744 [2024-11-17 
11:23:35.304807] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:10.744 [2024-11-17 11:23:35.304839] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:29:10.744 [2024-11-17 11:23:35.304948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:10.744 task offset: 16512 on job bdev=Nvme10n1 fails
00:29:10.744
00:29:10.744 Latency(us)
00:29:10.744 [2024-11-17T10:23:35.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme1n1 ended in about 0.85 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme1n1 : 0.85 151.00 9.44 75.50 0.00 279265.53 24175.50 250104.79
00:29:10.744 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme2n1 ended in about 0.86 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme2n1 : 0.86 148.56 9.28 74.28 0.00 277719.42 31845.64 240784.12
00:29:10.744 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme3n1 ended in about 0.84 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme3n1 : 0.84 229.75 14.36 76.58 0.00 197181.63 20583.16 253211.69
00:29:10.744 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme4n1 ended in about 0.86 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme4n1 : 0.86 179.40 11.21 42.83 0.00 262831.72 18544.26 274959.93
00:29:10.744 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme5n1 ended in about 0.84 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme5n1 : 0.84 152.95 9.56 76.48 0.00 251261.79 22719.15 267192.70
00:29:10.744 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme6n1 ended in about 0.84 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme6n1 : 0.84 151.72 9.48 75.86 0.00 247554.91 20680.25 259425.47
00:29:10.744 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme7n1 ended in about 0.87 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme7n1 : 0.87 147.60 9.22 73.80 0.00 249495.20 21165.70 273406.48
00:29:10.744 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme8n1 ended in about 0.87 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme8n1 : 0.87 151.65 9.48 73.53 0.00 239611.36 18835.53 246997.90
00:29:10.744 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme9n1 ended in about 0.87 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme9n1 : 0.87 146.51 9.16 73.25 0.00 239829.65 21359.88 236123.78
00:29:10.744 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.744 Job: Nvme10n1 ended in about 0.83 seconds with error
00:29:10.744 Verification LBA range: start 0x0 length 0x400
00:29:10.744 Nvme10n1 : 0.83 153.47 9.59 76.73 0.00 220745.32 22524.97 284280.60
00:29:10.744 [2024-11-17T10:23:35.402Z] ===================================================================================================================
00:29:10.744 [2024-11-17T10:23:35.402Z] Total : 1612.61 100.79 718.84 0.00 244946.38 18544.26 284280.60
00:29:10.744 [2024-11-17 11:23:35.331851] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
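The Total row of the bdevperf latency summary above can be sanity-checked as the column-wise sum of the ten per-job rows. A minimal sketch, with the IOPS and Fail/s values transcribed from the log (MiB/s is omitted because the printed total 100.79 reflects pre-rounding values):

```python
# Per-job columns transcribed from the latency table above
# (rows Nvme1n1 .. Nvme10n1, in order).
iops = [151.00, 148.56, 229.75, 179.40, 152.95,
        151.72, 147.60, 151.65, 146.51, 153.47]
fails = [75.50, 74.28, 76.58, 42.83, 76.48,
         75.86, 73.80, 73.53, 73.25, 76.73]

# The "Total" row is the column-wise sum of the per-job rows.
total_iops = round(sum(iops), 2)    # matches the logged Total IOPS, 1612.61
total_fails = round(sum(fails), 2)  # matches the logged Total Fail/s, 718.84
print(total_iops, total_fails)
```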
00:29:10.744 [2024-11-17 11:23:35.331930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.332202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.332252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fa450 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.332275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fa450 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.332376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.332404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23020a0 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.332421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23020a0 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.332511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.332549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f8700 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.332567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f8700 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.332646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.332672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x273b670 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.332689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x273b670 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.334311] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.334343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.334362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.334379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:10.744 [2024-11-17 11:23:35.334549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.334578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2734dc0 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.334596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2734dc0 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.334675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.744 [2024-11-17 11:23:35.334702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x273afd0 with addr=10.0.0.2, port=4420 00:29:10.744 [2024-11-17 11:23:35.334719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x273afd0 is same with the state(6) to be set 00:29:10.744 [2024-11-17 11:23:35.334745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fa450 (9): Bad file descriptor 00:29:10.744 [2024-11-17 11:23:35.334770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23020a0 (9): Bad file descriptor 00:29:10.744 [2024-11-17 11:23:35.334789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f8700 (9): Bad file descriptor 00:29:10.744 [2024-11-17 11:23:35.334807] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x273b670 (9): Bad file descriptor
00:29:10.744 [2024-11-17 11:23:35.334864] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:10.744 [2024-11-17 11:23:35.334888] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:29:10.744 [2024-11-17 11:23:35.334913] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:29:10.745 [2024-11-17 11:23:35.334933] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:10.745 [2024-11-17 11:23:35.335737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.745 [2024-11-17 11:23:35.335769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22f46f0 with addr=10.0.0.2, port=4420
00:29:10.745 [2024-11-17 11:23:35.335788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f46f0 is same with the state(6) to be set
00:29:10.745 [2024-11-17 11:23:35.335867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.745 [2024-11-17 11:23:35.335894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29743e0 with addr=10.0.0.2, port=4420
00:29:10.745 [2024-11-17 11:23:35.335911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29743e0 is same with the state(6) to be set
00:29:10.745 [2024-11-17 11:23:35.335986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.745 [2024-11-17 11:23:35.336012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2736810 with addr=10.0.0.2, port=4420
00:29:10.745 [2024-11-17 11:23:35.336028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2736810 is same with the state(6) to be set
00:29:10.745 [2024-11-17 11:23:35.336099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:10.745 [2024-11-17 11:23:35.336124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2242f50 with addr=10.0.0.2, port=4420
00:29:10.745 [2024-11-17 11:23:35.336141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2242f50 is same with the state(6) to be set
00:29:10.745 [2024-11-17 11:23:35.336160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2734dc0 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x273afd0 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f46f0 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29743e0 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2736810 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2242f50 (9): Bad file descriptor
00:29:10.745 [2024-11-17 11:23:35.336615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:10.745 [2024-11-17 11:23:35.336873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:10.745 [2024-11-17 11:23:35.336885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:10.745 [2024-11-17 11:23:35.336898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:10.745 [2024-11-17 11:23:35.336912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:10.745 [2024-11-17 11:23:35.336925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:10.745 [2024-11-17 11:23:35.336938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:10.745 [2024-11-17 11:23:35.336950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:11.313 11:23:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 325723 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 325723 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 325723 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:12.251 11:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.251 rmmod nvme_tcp 00:29:12.251 rmmod nvme_fabrics 00:29:12.251 rmmod nvme_keyring 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 325621 ']' 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 325621 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 325621 ']' 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 325621 00:29:12.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (325621) - No such process 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 325621 is not found' 00:29:12.251 Process with pid 325621 is not found 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:12.251 11:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.251 11:23:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.794 00:29:14.794 real 0m7.143s 00:29:14.794 user 0m17.022s 00:29:14.794 sys 0m1.412s 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.794 ************************************ 00:29:14.794 END TEST nvmf_shutdown_tc3 00:29:14.794 ************************************ 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.794 ************************************ 00:29:14.794 START TEST nvmf_shutdown_tc4 00:29:14.794 ************************************ 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.794 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.795 11:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.795 11:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.795 11:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.795 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.795 11:23:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.795 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.795 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.795 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.795 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.795 11:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:29:14.795 00:29:14.795 --- 10.0.0.2 ping statistics --- 00:29:14.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.795 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:14.795 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:29:14.796 00:29:14.796 --- 10.0.0.1 ping statistics --- 00:29:14.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.796 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.796 11:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=326582 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 326582 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 326582 ']' 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.796 [2024-11-17 11:23:39.146983] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:14.796 [2024-11-17 11:23:39.147079] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.796 [2024-11-17 11:23:39.217627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.796 [2024-11-17 11:23:39.260757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.796 [2024-11-17 11:23:39.260816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.796 [2024-11-17 11:23:39.260839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.796 [2024-11-17 11:23:39.260850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.796 [2024-11-17 11:23:39.260859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
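`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then `waitforlisten` blocks until the app is ready on `/var/tmp/spdk.sock` (the "Waiting for process to start up and listen on UNIX domain socket" message). A simplified stand-in for that wait (socket path from the log; the polling loop is an assumption — the real helper in `autotest_common.sh` also probes the socket with an RPC call):

```shell
# Poll until the UNIX-domain RPC socket exists, giving up after
# max_retries attempts. Only checks that the socket file has appeared;
# the real waitforlisten additionally issues an RPC against it.
waitforsocket() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```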
00:29:14.796 [2024-11-17 11:23:39.262258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.796 [2024-11-17 11:23:39.262320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.796 [2024-11-17 11:23:39.262388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:14.796 [2024-11-17 11:23:39.262391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.796 [2024-11-17 11:23:39.409735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.796 11:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:14.796 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.055 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.055 Malloc1 00:29:15.055 [2024-11-17 11:23:39.509743] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.055 Malloc2 00:29:15.055 Malloc3 00:29:15.055 Malloc4 00:29:15.055 Malloc5 00:29:15.313 Malloc6 00:29:15.313 Malloc7 00:29:15.313 Malloc8 00:29:15.313 Malloc9 
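The `shutdown.sh@27`–`@29` loop above removes `rpcs.txt` and then, once per subsystem in `{1..10}`, `cat`s a block of RPC commands into it; the accumulated file is replayed in a single `rpc_cmd` at `shutdown.sh@36`, which is what produces the `Malloc1`..`Malloc10` bdevs. A sketch of that accumulate-then-replay pattern (the per-subsystem RPC lines are not shown in this log, so the specific commands below are assumptions modeled on typical SPDK target setup):

```shell
# Build a batch RPC file with one block per subsystem; the file would
# then be replayed in one rpc.py invocation. The individual RPC lines
# are assumed, not copied from this log.
gen_rpcs() {
    local out=$1
    : > "$out"                       # start from an empty file
    for i in $(seq 1 10); do
        cat >> "$out" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
}
```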
00:29:15.313 Malloc10 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=326761 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:15.313 11:23:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:15.571 [2024-11-17 11:23:40.018758] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 326582 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 326582 ']' 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 326582 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.849 11:23:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326582 00:29:20.849 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.849 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.849 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326582' 00:29:20.849 killing process with pid 326582 00:29:20.849 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 326582 00:29:20.849 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 326582 00:29:20.849 [2024-11-17 11:23:45.012443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012587] 
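The `killprocess 326582` sequence above follows a fixed pattern: probe that the PID is still alive with `kill -0`, look up the process name, refuse to signal a `sudo` wrapper, then kill and wait. A condensed sketch of that sequence (the checks mirror the `autotest_common.sh` line references in the log; error handling is simplified):

```shell
# Simplified killprocess, following the sequence of checks visible in
# the log: liveness probe, name lookup, sudo guard, then kill + reap.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # common.sh@958: still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # common.sh@960: process name
    [ "$name" = sudo ] && return 1              # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap if it is our child
}
```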
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.012700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d380 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.013574] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181d850 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.014942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c9e0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.019754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462b0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.019814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462b0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.019831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462b0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.019844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a462b0 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.020411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.020446] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.020462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.020475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.849 [2024-11-17 11:23:45.020489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.020501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.020514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a46780 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021906] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.021920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45de0 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: 
-6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.025689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write 
completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.026372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 [2024-11-17 11:23:45.026415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 [2024-11-17 11:23:45.026431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 [2024-11-17 11:23:45.026445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.026459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.026472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb280 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write 
completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 [2024-11-17 11:23:45.026842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.850 [2024-11-17 11:23:45.026933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb750 is same with the state(6) to be set 00:29:20.850 [2024-11-17 11:23:45.026968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb750 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.026985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb750 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 
[2024-11-17 11:23:45.027004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb750 is same with the state(6) to be set 00:29:20.850 starting I/O failed: -6 00:29:20.850 [2024-11-17 11:23:45.027019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fb750 is same with the state(6) to be set 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.850 starting I/O failed: -6 00:29:20.850 Write completed with error (sct=0, sc=8) 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 [2024-11-17 11:23:45.027269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 [2024-11-17 11:23:45.027306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 
00:29:20.851 [2024-11-17 11:23:45.027323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 [2024-11-17 11:23:45.027337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 starting I/O failed: -6 00:29:20.851 [2024-11-17 11:23:45.027349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 [2024-11-17 11:23:45.027362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbc20 is same with the state(6) to be set 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with error (sct=0, sc=8) 00:29:20.851 starting I/O failed: -6 00:29:20.851 Write completed with 
error (sct=0, sc=8)
00:29:20.851 starting I/O failed: -6
00:29:20.851 Write completed with error (sct=0, sc=8)
[… "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries repeated verbatim throughout this span; repeats collapsed …]
00:29:20.851 [2024-11-17 11:23:45.027717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fadb0 is same with the state(6) to be set
[… same tcp.c:1773 message for tqpair=0x19fadb0 repeated at 11:23:45.027749, .027766, .027780, .027793, .027806, .027818 and .027832; repeats collapsed …]
00:29:20.851 [2024-11-17 11:23:45.027967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.852 [2024-11-17 11:23:45.029597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.852 NVMe io qpair process completion error
00:29:20.852 [2024-11-17 11:23:45.030318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd430 is same with the state(6) to be set
[… same tcp.c:1773 message for tqpair=0x19fd430 repeated at 11:23:45.030346, .030363 and .030376; repeats collapsed …]
00:29:20.852 [2024-11-17 11:23:45.030821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.852 [2024-11-17 11:23:45.031897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.853 [2024-11-17 11:23:45.033313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.853 [2024-11-17 11:23:45.035230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.853 NVMe io qpair process completion error
00:29:20.853 [2024-11-17 11:23:45.036486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.854 [2024-11-17 11:23:45.037460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.854 [2024-11-17 11:23:45.038674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.855 [2024-11-17 11:23:45.041193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.855 NVMe io qpair process completion error
00:29:20.855 Write completed with error (sct=0, sc=8)
[… further "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeats collapsed …]
00:29:20.855 starting
I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 [2024-11-17 11:23:45.042514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 
00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 Write completed with error (sct=0, sc=8) 00:29:20.855 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write 
completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 [2024-11-17 11:23:45.043644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 
starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 
Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 [2024-11-17 11:23:45.044749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write 
completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 
Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.856 Write completed with error (sct=0, sc=8) 00:29:20.856 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 
00:29:20.857 [2024-11-17 11:23:45.046442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.857 NVMe io qpair process completion error 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed 
with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 [2024-11-17 11:23:45.047799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with 
error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 
00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 [2024-11-17 11:23:45.048896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 
Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 starting I/O failed: -6 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.857 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, 
sc=8) 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 [2024-11-17 11:23:45.050002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 00:29:20.858 starting I/O failed: -6 00:29:20.858 Write completed with error (sct=0, sc=8) 
00:29:20.858 starting I/O failed: -6
00:29:20.858 Write completed with error (sct=0, sc=8)
00:29:20.858 [the two messages above repeat many times; repeats omitted]
00:29:20.858 [2024-11-17 11:23:45.051776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.858 NVMe io qpair process completion error
00:29:20.858 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:20.858 [2024-11-17 11:23:45.053027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.859 [repeated completion-error messages omitted]
00:29:20.859 [2024-11-17 11:23:45.053993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.859 [repeated completion-error messages omitted]
00:29:20.859 [2024-11-17 11:23:45.055178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.860 [repeated completion-error messages omitted]
00:29:20.860 [2024-11-17 11:23:45.058893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.860 NVMe io qpair process completion error
00:29:20.860 [repeated completion-error messages omitted]
00:29:20.860 [2024-11-17 11:23:45.060168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:20.860 [repeated completion-error messages omitted]
00:29:20.860 [2024-11-17 11:23:45.061239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:20.861 [repeated completion-error messages omitted]
00:29:20.861 [2024-11-17 11:23:45.062323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:20.861 [repeated completion-error messages omitted]
00:29:20.861 [2024-11-17 11:23:45.066254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:20.861 NVMe io qpair process completion error
00:29:20.861 [repeated completion-error messages omitted; log truncated]
(sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting 
I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write 
completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 
starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.862 Write completed with error (sct=0, sc=8) 00:29:20.862 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 
00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, 
sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 [2024-11-17 11:23:45.071053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.863 NVMe io qpair process completion error 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 
starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 [2024-11-17 11:23:45.072335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:20.863 starting I/O failed: -6 00:29:20.863 starting I/O failed: -6 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 
Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with 
error (sct=0, sc=8) 00:29:20.863 [2024-11-17 11:23:45.073469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 Write completed with error (sct=0, sc=8) 00:29:20.863 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, 
sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O 
failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 [2024-11-17 11:23:45.074665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 
00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, 
sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 [2024-11-17 11:23:45.076615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 
1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:20.864 NVMe io qpair process completion error 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 starting I/O failed: -6 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.864 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 starting I/O failed: -6 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 starting I/O failed: -6 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 starting I/O failed: -6 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 00:29:20.865 Write completed with error (sct=0, sc=8) 
00:29:20.865 Write completed with error (sct=0, sc=8) 
00:29:20.865 starting I/O failed: -6 
[... the two messages above repeat for each queued write; duplicate "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...] 
00:29:20.865 [2024-11-17 11:23:45.077897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:29:20.865 [2024-11-17 11:23:45.078954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:29:20.865 [2024-11-17 11:23:45.080104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 
00:29:20.866 [2024-11-17 11:23:45.083759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on
qpair id 4 
00:29:20.866 NVMe io qpair process completion error 
00:29:20.866 Initializing NVMe Controllers 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:29:20.866 Controller IO queue size 128, less than required. 
00:29:20.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 
00:29:20.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 
[... the same two queue-size warning lines follow each "Attached to" line above; repeats omitted ...] 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 
00:29:20.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 
00:29:20.866 Initialization complete.
Launching workers. 
00:29:20.866 ======================================================== 
00:29:20.866 Latency(us) 
00:29:20.866 Device Information : IOPS MiB/s Average min max 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1771.66 76.13 72270.22 903.61 122956.28 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1781.47 76.55 71896.98 1092.37 125203.54 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1818.37 78.13 70461.73 846.81 127315.06 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1757.59 75.52 72952.49 1100.65 131438.89 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1779.55 76.47 72100.47 831.89 116795.72 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1823.49 78.35 70389.72 788.69 137874.52 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1792.99 77.04 71614.59 1108.09 140582.04 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1799.18 77.31 70523.08 956.44 118371.25 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1823.06 78.33 69624.30 913.82 116693.90 
00:29:20.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1873.39 80.50 67780.85 726.72 116705.66 
00:29:20.866 ======================================================== 
00:29:20.866 Total : 18020.76 774.33 70936.29 726.72 140582.04 
00:29:20.866 
00:29:20.866 [2024-11-17 11:23:45.089978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ed330 is same with the state(6) to be set 
00:29:20.866 [2024-11-17 11:23:45.090088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fc040 is same with the state(6) to be set 
00:29:20.866 [2024-11-17 11:23:45.090148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2519a40 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2514b40 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f7140 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250fc40 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2500f40 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24f2240 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2505e40 is same with the state(6) to be set 
00:29:20.867 [2024-11-17 11:23:45.090552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250ad40 is same with the state(6) to be set 
00:29:20.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:29:21.125 11:23:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 
00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 326761 
00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 
00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 326761 00:29:22.062 11:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 326761 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:22.062 11:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.062 rmmod nvme_tcp 00:29:22.062 rmmod nvme_fabrics 00:29:22.062 rmmod nvme_keyring 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:22.062 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 326582 ']' 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 326582 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 326582 ']' 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 326582 00:29:22.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (326582) - No such process 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 326582 is not found' 
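The killprocess trace above probes the target PID with `kill -0`, which sends no signal and only checks that the process exists and is signalable; when the target already exited, the helper falls through to the "not found" message seen in the log. A simplified sketch of that pattern (not the full autotest_common.sh helper):

```shell
# Simplified killprocess: kill -0 only tests for PID existence; the
# process is actually signaled only when that check succeeds.
killprocess() {
  local pid=$1
  if kill -0 "$pid" 2>/dev/null; then
    kill "$pid"
  else
    echo "Process with pid $pid is not found"
  fi
}

sleep 30 &
worker=$!
killprocess "$worker"          # PID exists: gets terminated, no output
wait "$worker" 2>/dev/null || true
killprocess "$worker"          # already reaped: prints the not-found message
```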
00:29:22.063 Process with pid 326582 is not found 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.063 11:23:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.971 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.971 00:29:23.971 real 0m9.717s 00:29:23.971 user 0m24.106s 00:29:23.971 sys 0m5.526s 00:29:23.971 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
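The iptr cleanup traced above round-trips the firewall state through `iptables-save | grep -v SPDK_NVMF | iptables-restore`, so only rules whose saved form mentions SPDK_NVMF are dropped while everything else is restored unchanged. The filtering half needs no root and can be sketched on sample rule text (the rules below are illustrative, not from this run):

```shell
# Drop SPDK-tagged rules from a saved ruleset, keeping all other rules.
# Mirrors the grep stage of the iptr pipeline traced above.
strip_spdk_rules() {
  grep -v 'SPDK_NVMF'
}

printf '%s\n' \
  '-A INPUT -i lo -j ACCEPT' \
  '-A INPUT -p tcp --dport 4420 -m comment --comment "SPDK_NVMF" -j ACCEPT' \
  | strip_spdk_rules
# prints only: -A INPUT -i lo -j ACCEPT
```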
00:29:23.971 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:23.971 ************************************ 00:29:23.971 END TEST nvmf_shutdown_tc4 00:29:23.971 ************************************ 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:24.232 00:29:24.232 real 0m36.845s 00:29:24.232 user 1m38.684s 00:29:24.232 sys 0m11.965s 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.232 ************************************ 00:29:24.232 END TEST nvmf_shutdown 00:29:24.232 ************************************ 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:24.232 ************************************ 00:29:24.232 START TEST nvmf_nsid 00:29:24.232 ************************************ 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:24.232 * Looking for test storage... 
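Before the teardown above, spdk_nvme_perf printed a per-device latency table whose Total row (18020.76 IOPS) is the sum of the per-device IOPS column. That can be cross-checked with a small awk filter over captured rows; the two rows below are copied from the table, while the helper name is ours:

```shell
# Sum the IOPS field (the value right after "core 0:") across the
# device rows of an spdk_nvme_perf latency table read from stdin.
sum_iops() {
  awk '{ for (i = 1; i < NF; i++) if ($i == "0:") sum += $(i + 1) }
       END { printf "%.2f\n", sum }'
}

printf '%s\n' \
  'TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1771.66 76.13 72270.22 903.61 122956.28' \
  'TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1781.47 76.55 71896.98 1092.37 125203.54' \
  | sum_iops
# prints: 3553.13
```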
00:29:24.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.232 
11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.232 --rc genhtml_branch_coverage=1 00:29:24.232 --rc genhtml_function_coverage=1 00:29:24.232 --rc genhtml_legend=1 00:29:24.232 --rc geninfo_all_blocks=1 00:29:24.232 --rc 
geninfo_unexecuted_blocks=1 00:29:24.232 00:29:24.232 ' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.232 --rc genhtml_branch_coverage=1 00:29:24.232 --rc genhtml_function_coverage=1 00:29:24.232 --rc genhtml_legend=1 00:29:24.232 --rc geninfo_all_blocks=1 00:29:24.232 --rc geninfo_unexecuted_blocks=1 00:29:24.232 00:29:24.232 ' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.232 --rc genhtml_branch_coverage=1 00:29:24.232 --rc genhtml_function_coverage=1 00:29:24.232 --rc genhtml_legend=1 00:29:24.232 --rc geninfo_all_blocks=1 00:29:24.232 --rc geninfo_unexecuted_blocks=1 00:29:24.232 00:29:24.232 ' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.232 --rc genhtml_branch_coverage=1 00:29:24.232 --rc genhtml_function_coverage=1 00:29:24.232 --rc genhtml_legend=1 00:29:24.232 --rc geninfo_all_blocks=1 00:29:24.232 --rc geninfo_unexecuted_blocks=1 00:29:24.232 00:29:24.232 ' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
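The cmp_versions trace above splits each version string on dots and compares the fields numerically, here deciding that lcov 1.15 < 2 and picking the option set for the older lcov. A compact bash sketch of the same "less than" comparison (our simplification, not the scripts/common.sh implementation):

```shell
# Dotted-version less-than: split on ".", compare field by field,
# padding the shorter version with zeros (so "2" behaves like "2.0").
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    if ((x < y)); then return 0; fi
    if ((x > y)); then return 1; fi
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
# prints: 1.15 < 2
```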
00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.232 11:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.232 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.233 11:23:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:26.763 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.764 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.764 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:26.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:29:26.764 00:29:26.764 --- 10.0.0.2 ping statistics --- 00:29:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.764 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:26.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:29:26.764 00:29:26.764 --- 10.0.0.1 ping statistics --- 00:29:26.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.764 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.764 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.765 11:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=329491 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 329491 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 329491 ']' 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.765 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:26.765 [2024-11-17 11:23:51.243482] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:26.765 [2024-11-17 11:23:51.243584] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.765 [2024-11-17 11:23:51.313522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.765 [2024-11-17 11:23:51.355167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.765 [2024-11-17 11:23:51.355230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.765 [2024-11-17 11:23:51.355253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.765 [2024-11-17 11:23:51.355263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.765 [2024-11-17 11:23:51.355272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
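Annotation: the `[: : integer expression expected` message recorded earlier in this log (nvmf/common.sh line 33, `'[' '' -eq 1 ']'`) is the classic failure of applying a numeric test operator to an empty variable. A minimal sketch of the failure mode and a common guard, not SPDK's actual fix:

```shell
#!/bin/sh
# Reproduce the failure mode: '-eq' requires an integer on both sides,
# so an empty/unset variable makes the test command itself error out.
val=""
if [ "$val" -eq 1 ] 2>/dev/null; then
  echo "numeric test succeeded"
else
  echo "numeric test failed or errored"
fi

# Common guard: default the empty value to 0 before comparing,
# so the test is always well-formed.
if [ "${val:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With `val` empty, the first test errors (exit status 2, message suppressed here) and the guarded form cleanly evaluates to "disabled".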
00:29:26.765 [2024-11-17 11:23:51.355890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=329518 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:27.024 
11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8b177c2c-fcc8-4d5a-b7d6-4d71d1d50771 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=87efb930-b167-4fa9-b161-ff02306e7f1e 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7f1e4234-3399-4f59-801c-b066d4fff41f 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:27.024 null0 00:29:27.024 null1 00:29:27.024 null2 00:29:27.024 [2024-11-17 11:23:51.536204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.024 [2024-11-17 11:23:51.550310] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:27.024 [2024-11-17 11:23:51.550375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329518 ] 00:29:27.024 [2024-11-17 11:23:51.560414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 329518 /var/tmp/tgt2.sock 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 329518 ']' 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:27.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
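Annotation: `waitforlisten` above blocks until the freshly spawned target creates its RPC UNIX socket (`/var/tmp/tgt2.sock`). A generic sketch of that poll-with-bounded-retries pattern; the function name and timings here are illustrative, not SPDK's helper:

```shell
#!/bin/sh
# Poll until a path appears, giving up after a bounded number of retries.
# Illustrative stand-in for the waitforlisten behavior seen in this log.
wait_for_path() {
  path=$1
  retries=${2:-100}
  i=0
  while [ ! -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -le "$retries" ] || return 1   # give up after N attempts
    sleep 0.1
  done
  return 0
}

# Example: a background job creates the file shortly after we start waiting,
# mimicking a daemon that opens its listen socket during startup.
tmpdir=$(mktemp -d)
( sleep 0.3; : > "$tmpdir/tgt2.sock" ) &
wait_for_path "$tmpdir/tgt2.sock" 50 && echo "listener ready"
```

The real helper additionally probes the socket via `rpc.py` rather than just checking existence, since the path can exist before the server is accepting connections.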
00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.024 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:27.024 [2024-11-17 11:23:51.617734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.024 [2024-11-17 11:23:51.664052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.297 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.297 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:27.297 11:23:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:27.866 [2024-11-17 11:23:52.294510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.866 [2024-11-17 11:23:52.310734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:27.866 nvme0n1 nvme0n2 00:29:27.866 nvme1n1 00:29:27.866 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:27.866 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:27.866 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:28.432 11:23:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8b177c2c-fcc8-4d5a-b7d6-4d71d1d50771 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:29.366 11:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8b177c2cfcc84d5ab7d64d71d1d50771 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8B177C2CFCC84D5AB7D64D71D1D50771 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8B177C2CFCC84D5AB7D64D71D1D50771 == \8\B\1\7\7\C\2\C\F\C\C\8\4\D\5\A\B\7\D\6\4\D\7\1\D\1\D\5\0\7\7\1 ]] 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 87efb930-b167-4fa9-b161-ff02306e7f1e 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:29.366 
11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:29.366 11:23:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=87efb930b1674fa9b161ff02306e7f1e 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 87EFB930B1674FA9B161FF02306E7F1E 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 87EFB930B1674FA9B161FF02306E7F1E == \8\7\E\F\B\9\3\0\B\1\6\7\4\F\A\9\B\1\6\1\F\F\0\2\3\0\6\E\7\F\1\E ]] 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7f1e4234-3399-4f59-801c-b066d4fff41f 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7f1e423433994f59801cb066d4fff41f 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7F1E423433994F59801CB066D4FFF41F 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7F1E423433994F59801CB066D4FFF41F == \7\F\1\E\4\2\3\4\3\3\9\9\4\F\5\9\8\0\1\C\B\0\6\6\D\4\F\F\F\4\1\F ]] 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 329518 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 329518 ']' 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 329518 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329518 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329518' 00:29:29.624 killing process with pid 329518 00:29:29.624 11:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 329518 00:29:29.624 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 329518 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.190 rmmod nvme_tcp 00:29:30.190 rmmod nvme_fabrics 00:29:30.190 rmmod nvme_keyring 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 329491 ']' 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 329491 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 329491 ']' 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 329491 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.190 11:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329491 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329491' 00:29:30.190 killing process with pid 329491 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 329491 00:29:30.190 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 329491 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.450 11:23:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.450 11:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.353 00:29:32.353 real 0m8.257s 00:29:32.353 user 0m7.895s 00:29:32.353 sys 0m2.709s 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:32.353 ************************************ 00:29:32.353 END TEST nvmf_nsid 00:29:32.353 ************************************ 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:32.353 00:29:32.353 real 18m7.591s 00:29:32.353 user 50m22.625s 00:29:32.353 sys 3m54.360s 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:32.353 11:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:32.353 ************************************ 00:29:32.353 END TEST nvmf_target_extra 00:29:32.353 ************************************ 00:29:32.612 11:23:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:32.612 11:23:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.612 11:23:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.612 11:23:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:32.612 ************************************ 00:29:32.612 START TEST nvmf_host 00:29:32.612 ************************************ 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:32.612 * Looking for test storage... 
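The nsid test above exercises two helper patterns: `waitforblk`, which polls `lsblk` until a namespace block device appears, and `uuid2nguid`, which strips dashes from a UUID so it can be compared against the NGUID that `nvme id-ns ... | jq -r .nguid` reports. A minimal sketch of both, simplified from what the log shows (these are illustrative reimplementations, not the actual SPDK `autotest_common.sh` code):

```shell
#!/bin/bash
# Sketch of uuid2nguid: the log compares the dash-stripped, upper-cased
# UUID against the NGUID reported by `nvme id-ns -o json | jq -r .nguid`.
uuid2nguid() {
    echo "$1" | tr -d '-' | tr '[:lower:]' '[:upper:]'
}

# Sketch of waitforblk: poll `lsblk -l -o NAME` until the named block
# device shows up, retrying once per second for up to 15 seconds, as the
# i -lt 15 / sleep 1 loop in the log does.
waitforblk() {
    local name=$1 i=0
    while ! lsblk -l -o NAME 2>/dev/null | grep -q -w "$name"; do
        [ "$i" -lt 15 ] || return 1
        i=$((i + 1))
        sleep 1
    done
    return 0
}
```

With the first UUID from the log, `uuid2nguid 8b177c2c-fcc8-4d5a-b7d6-4d71d1d50771` yields `8B177C2CFCC84D5AB7D64D71D1D50771`, matching the `[[ ... == ... ]]` comparison the test performs.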
00:29:32.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.612 --rc genhtml_branch_coverage=1 00:29:32.612 --rc genhtml_function_coverage=1 00:29:32.612 --rc genhtml_legend=1 00:29:32.612 --rc geninfo_all_blocks=1 00:29:32.612 --rc geninfo_unexecuted_blocks=1 00:29:32.612 00:29:32.612 ' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.612 --rc genhtml_branch_coverage=1 00:29:32.612 --rc genhtml_function_coverage=1 00:29:32.612 --rc genhtml_legend=1 00:29:32.612 --rc 
geninfo_all_blocks=1 00:29:32.612 --rc geninfo_unexecuted_blocks=1 00:29:32.612 00:29:32.612 ' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.612 --rc genhtml_branch_coverage=1 00:29:32.612 --rc genhtml_function_coverage=1 00:29:32.612 --rc genhtml_legend=1 00:29:32.612 --rc geninfo_all_blocks=1 00:29:32.612 --rc geninfo_unexecuted_blocks=1 00:29:32.612 00:29:32.612 ' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.612 --rc genhtml_branch_coverage=1 00:29:32.612 --rc genhtml_function_coverage=1 00:29:32.612 --rc genhtml_legend=1 00:29:32.612 --rc geninfo_all_blocks=1 00:29:32.612 --rc geninfo_unexecuted_blocks=1 00:29:32.612 00:29:32.612 ' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:32.612 11:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.613 ************************************ 00:29:32.613 START TEST nvmf_multicontroller 00:29:32.613 ************************************ 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:32.613 * Looking for test storage... 
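Each test section above begins by probing the lcov version with the `scripts/common.sh` helpers (`lt 1.15 2` via `cmp_versions`, `decimal`, and the `ver1`/`ver2` field loop). A hedged sketch of that dot-separated version comparison, written as a single bash function rather than the split helpers the log traces (names and structure here are illustrative, not the SPDK originals):

```shell
#!/bin/bash
# Sketch of the cmp_versions "<" logic traced in the log: split both
# versions on ".", then compare field by field, treating missing
# trailing fields as 0. Returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # earlier field decides: strictly less
        (( x > y )) && return 1   # strictly greater, so not less-than
    done
    return 1                      # equal versions are not less-than
}
```

This mirrors the outcome visible in the log: `lt 1.15 2` succeeds (1 < 2 on the first field), so the scripts fall back to the lcov 1.x branch-coverage options.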
00:29:32.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.613 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.872 --rc genhtml_branch_coverage=1 00:29:32.872 --rc genhtml_function_coverage=1 
00:29:32.872 --rc genhtml_legend=1 00:29:32.872 --rc geninfo_all_blocks=1 00:29:32.872 --rc geninfo_unexecuted_blocks=1 00:29:32.872 00:29:32.872 ' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.872 --rc genhtml_branch_coverage=1 00:29:32.872 --rc genhtml_function_coverage=1 00:29:32.872 --rc genhtml_legend=1 00:29:32.872 --rc geninfo_all_blocks=1 00:29:32.872 --rc geninfo_unexecuted_blocks=1 00:29:32.872 00:29:32.872 ' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.872 --rc genhtml_branch_coverage=1 00:29:32.872 --rc genhtml_function_coverage=1 00:29:32.872 --rc genhtml_legend=1 00:29:32.872 --rc geninfo_all_blocks=1 00:29:32.872 --rc geninfo_unexecuted_blocks=1 00:29:32.872 00:29:32.872 ' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.872 --rc genhtml_branch_coverage=1 00:29:32.872 --rc genhtml_function_coverage=1 00:29:32.872 --rc genhtml_legend=1 00:29:32.872 --rc geninfo_all_blocks=1 00:29:32.872 --rc geninfo_unexecuted_blocks=1 00:29:32.872 00:29:32.872 ' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.872 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.873 11:23:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.873 11:23:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.401 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:35.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:35.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.402 11:23:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:35.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:35.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:35.402 00:29:35.402 --- 10.0.0.2 ping statistics --- 00:29:35.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.402 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:29:35.402 00:29:35.402 --- 10.0.0.1 ping statistics --- 00:29:35.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.402 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=331953 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 331953 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 331953 ']' 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.402 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.402 [2024-11-17 11:23:59.718891] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:35.402 [2024-11-17 11:23:59.718977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.402 [2024-11-17 11:23:59.792472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.402 [2024-11-17 11:23:59.838902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.402 [2024-11-17 11:23:59.838955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:35.403 [2024-11-17 11:23:59.838978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.403 [2024-11-17 11:23:59.838990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.403 [2024-11-17 11:23:59.838999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.403 [2024-11-17 11:23:59.840342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.403 [2024-11-17 11:23:59.840405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.403 [2024-11-17 11:23:59.840408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 [2024-11-17 11:23:59.972211] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:23:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 Malloc0 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 [2024-11-17 
11:24:00.038310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.403 [2024-11-17 11:24:00.046185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.403 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.661 Malloc1 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=331996 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 331996 /var/tmp/bdevperf.sock 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 331996 ']' 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.661 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.920 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.920 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.920 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:35.920 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.920 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.177 NVMe0n1 00:29:36.177 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.178 1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:36.178 11:24:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.178 request: 00:29:36.178 { 00:29:36.178 "name": "NVMe0", 00:29:36.178 "trtype": "tcp", 00:29:36.178 "traddr": "10.0.0.2", 00:29:36.178 "adrfam": "ipv4", 00:29:36.178 "trsvcid": "4420", 00:29:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.178 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:36.178 "hostaddr": "10.0.0.1", 00:29:36.178 "prchk_reftag": false, 00:29:36.178 "prchk_guard": false, 00:29:36.178 "hdgst": false, 00:29:36.178 "ddgst": false, 00:29:36.178 "allow_unrecognized_csi": false, 00:29:36.178 "method": "bdev_nvme_attach_controller", 00:29:36.178 "req_id": 1 00:29:36.178 } 00:29:36.178 Got JSON-RPC error response 00:29:36.178 response: 00:29:36.178 { 00:29:36.178 "code": -114, 00:29:36.178 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.178 } 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.178 11:24:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.178 request: 00:29:36.178 { 00:29:36.178 "name": "NVMe0", 00:29:36.178 "trtype": "tcp", 00:29:36.178 "traddr": "10.0.0.2", 00:29:36.178 "adrfam": "ipv4", 00:29:36.178 "trsvcid": "4420", 00:29:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:36.178 "hostaddr": "10.0.0.1", 00:29:36.178 "prchk_reftag": false, 00:29:36.178 "prchk_guard": false, 00:29:36.178 "hdgst": false, 00:29:36.178 "ddgst": false, 00:29:36.178 "allow_unrecognized_csi": false, 00:29:36.178 "method": "bdev_nvme_attach_controller", 00:29:36.178 "req_id": 1 00:29:36.178 } 00:29:36.178 Got JSON-RPC error response 00:29:36.178 response: 00:29:36.178 { 00:29:36.178 "code": -114, 00:29:36.178 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.178 } 00:29:36.178 11:24:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.178 request: 00:29:36.178 { 00:29:36.178 "name": "NVMe0", 00:29:36.178 "trtype": "tcp", 00:29:36.178 "traddr": "10.0.0.2", 00:29:36.178 "adrfam": "ipv4", 00:29:36.178 "trsvcid": "4420", 00:29:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.178 "hostaddr": "10.0.0.1", 00:29:36.178 "prchk_reftag": false, 00:29:36.178 "prchk_guard": false, 00:29:36.178 "hdgst": false, 00:29:36.178 "ddgst": false, 00:29:36.178 "multipath": "disable", 00:29:36.178 "allow_unrecognized_csi": false, 00:29:36.178 "method": "bdev_nvme_attach_controller", 00:29:36.178 "req_id": 1 00:29:36.178 } 00:29:36.178 Got JSON-RPC error response 00:29:36.178 response: 00:29:36.178 { 00:29:36.178 "code": -114, 00:29:36.178 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:36.178 } 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.178 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.178 request: 00:29:36.178 { 00:29:36.178 "name": "NVMe0", 00:29:36.178 "trtype": "tcp", 00:29:36.178 "traddr": "10.0.0.2", 00:29:36.178 "adrfam": "ipv4", 00:29:36.178 "trsvcid": "4420", 00:29:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.178 "hostaddr": "10.0.0.1", 00:29:36.178 "prchk_reftag": false, 00:29:36.178 "prchk_guard": false, 00:29:36.178 "hdgst": false, 00:29:36.178 "ddgst": false, 00:29:36.178 "multipath": "failover", 00:29:36.178 "allow_unrecognized_csi": false, 00:29:36.178 "method": "bdev_nvme_attach_controller", 00:29:36.178 "req_id": 1 00:29:36.178 } 00:29:36.178 Got JSON-RPC error response 00:29:36.178 response: 00:29:36.178 { 00:29:36.179 "code": -114, 00:29:36.179 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:36.179 } 00:29:36.179 11:24:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.179 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.436 NVMe0n1 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.436 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:36.436 11:24:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:37.815 { 00:29:37.816 "results": [ 00:29:37.816 { 00:29:37.816 "job": "NVMe0n1", 00:29:37.816 "core_mask": "0x1", 00:29:37.816 "workload": "write", 00:29:37.816 "status": "finished", 00:29:37.816 "queue_depth": 128, 00:29:37.816 "io_size": 4096, 00:29:37.816 "runtime": 1.003532, 00:29:37.816 "iops": 18478.733114639093, 00:29:37.816 "mibps": 72.18255122905896, 00:29:37.816 "io_failed": 0, 00:29:37.816 "io_timeout": 0, 00:29:37.816 "avg_latency_us": 6916.254313744287, 00:29:37.816 "min_latency_us": 4490.42962962963, 00:29:37.816 "max_latency_us": 12379.022222222222 00:29:37.816 } 00:29:37.816 ], 00:29:37.816 "core_count": 1 00:29:37.816 } 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 331996 ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331996' 00:29:37.816 killing process with pid 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 331996 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:37.816 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.816 [2024-11-17 11:24:00.150188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:37.816 [2024-11-17 11:24:00.150278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331996 ] 00:29:37.816 [2024-11-17 11:24:00.222043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.816 [2024-11-17 11:24:00.269948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.816 [2024-11-17 11:24:00.956891] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name ab24660a-78c6-4546-872a-f9b107af08e4 already exists 00:29:37.816 [2024-11-17 11:24:00.956933] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:ab24660a-78c6-4546-872a-f9b107af08e4 alias for bdev NVMe1n1 00:29:37.816 [2024-11-17 11:24:00.956948] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:37.816 Running I/O for 1 seconds... 00:29:37.816 18416.00 IOPS, 71.94 MiB/s 00:29:37.816 Latency(us) 00:29:37.816 [2024-11-17T10:24:02.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.816 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:37.816 NVMe0n1 : 1.00 18478.73 72.18 0.00 0.00 6916.25 4490.43 12379.02 00:29:37.816 [2024-11-17T10:24:02.474Z] =================================================================================================================== 00:29:37.816 [2024-11-17T10:24:02.474Z] Total : 18478.73 72.18 0.00 0.00 6916.25 4490.43 12379.02 00:29:37.816 Received shutdown signal, test time was about 1.000000 seconds 00:29:37.816 00:29:37.816 Latency(us) 00:29:37.816 [2024-11-17T10:24:02.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.816 [2024-11-17T10:24:02.474Z] =================================================================================================================== 00:29:37.816 [2024-11-17T10:24:02.474Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:37.816 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.816 rmmod nvme_tcp 00:29:37.816 rmmod nvme_fabrics 00:29:37.816 rmmod nvme_keyring 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 331953 ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 331953 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 331953 ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 331953 
00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.816 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331953 00:29:38.075 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:38.075 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:38.075 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331953' 00:29:38.075 killing process with pid 331953 00:29:38.075 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 331953 00:29:38.075 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 331953 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.335 11:24:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.239 11:24:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.239 00:29:40.239 real 0m7.562s 00:29:40.239 user 0m11.737s 00:29:40.239 sys 0m2.422s 00:29:40.239 11:24:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.239 11:24:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:40.239 ************************************ 00:29:40.239 END TEST nvmf_multicontroller 00:29:40.239 ************************************ 00:29:40.239 11:24:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:40.239 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.240 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.240 11:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.240 ************************************ 00:29:40.240 START TEST nvmf_aer 00:29:40.240 ************************************ 00:29:40.240 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:40.240 * Looking for test storage... 
00:29:40.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.240 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:40.498 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:29:40.498 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:40.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.499 --rc genhtml_branch_coverage=1 00:29:40.499 --rc genhtml_function_coverage=1 00:29:40.499 --rc genhtml_legend=1 00:29:40.499 --rc geninfo_all_blocks=1 00:29:40.499 --rc geninfo_unexecuted_blocks=1 00:29:40.499 00:29:40.499 ' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:40.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.499 --rc 
genhtml_branch_coverage=1 00:29:40.499 --rc genhtml_function_coverage=1 00:29:40.499 --rc genhtml_legend=1 00:29:40.499 --rc geninfo_all_blocks=1 00:29:40.499 --rc geninfo_unexecuted_blocks=1 00:29:40.499 00:29:40.499 ' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:40.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.499 --rc genhtml_branch_coverage=1 00:29:40.499 --rc genhtml_function_coverage=1 00:29:40.499 --rc genhtml_legend=1 00:29:40.499 --rc geninfo_all_blocks=1 00:29:40.499 --rc geninfo_unexecuted_blocks=1 00:29:40.499 00:29:40.499 ' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:40.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.499 --rc genhtml_branch_coverage=1 00:29:40.499 --rc genhtml_function_coverage=1 00:29:40.499 --rc genhtml_legend=1 00:29:40.499 --rc geninfo_all_blocks=1 00:29:40.499 --rc geninfo_unexecuted_blocks=1 00:29:40.499 00:29:40.499 ' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.499 11:24:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.499 11:24:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.499 11:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.499 11:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.499 11:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.499 11:24:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:42.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:42.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.402 11:24:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:42.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:42.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.402 11:24:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:42.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:29:42.402 00:29:42.402 --- 10.0.0.2 ping statistics --- 00:29:42.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.402 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:42.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:29:42.402 00:29:42.402 --- 10.0.0.1 ping statistics --- 00:29:42.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.402 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.402 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
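The namespace wiring traced above (nvmf_tcp_init in nvmf/common.sh) boils down to a handful of `ip` and `iptables` calls. A sketch under the assumption that the physical `cvl_0_*` ice interfaces already exist; `DRY_RUN=1` (the default here, since the real run needs root) prints the commands instead of executing them:

```shell
# Dry-run wrapper: set DRY_RUN=0 to actually execute (requires root).
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

setup_tcp_netns() {
    run ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port into it
    run ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, host side
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface:
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                            # host -> target namespace
    run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
setup_tcp_netns
```

The two pings at the end mirror the round-trip checks in the log: one from the host into the namespace, one back out.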
common/autotest_common.sh@10 -- # set +x 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=334309 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 334309 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 334309 ']' 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.403 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.661 [2024-11-17 11:24:07.095588] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:42.661 [2024-11-17 11:24:07.095677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.661 [2024-11-17 11:24:07.179857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.661 [2024-11-17 11:24:07.228316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:42.661 [2024-11-17 11:24:07.228379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.661 [2024-11-17 11:24:07.228392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.661 [2024-11-17 11:24:07.228403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.661 [2024-11-17 11:24:07.228423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.661 [2024-11-17 11:24:07.229988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.661 [2024-11-17 11:24:07.230072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.661 [2024-11-17 11:24:07.230017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.661 [2024-11-17 11:24:07.230074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 [2024-11-17 11:24:07.365948] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 Malloc0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 [2024-11-17 11:24:07.434390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:42.920 [ 00:29:42.920 { 00:29:42.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.920 "subtype": "Discovery", 00:29:42.920 "listen_addresses": [], 00:29:42.920 "allow_any_host": true, 00:29:42.920 "hosts": [] 00:29:42.920 }, 00:29:42.920 { 00:29:42.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.920 "subtype": "NVMe", 00:29:42.920 "listen_addresses": [ 00:29:42.920 { 00:29:42.920 "trtype": "TCP", 00:29:42.920 "adrfam": "IPv4", 00:29:42.920 "traddr": "10.0.0.2", 00:29:42.920 "trsvcid": "4420" 00:29:42.920 } 00:29:42.920 ], 00:29:42.920 "allow_any_host": true, 00:29:42.920 "hosts": [], 00:29:42.920 "serial_number": "SPDK00000000000001", 00:29:42.920 "model_number": "SPDK bdev Controller", 00:29:42.920 "max_namespaces": 2, 00:29:42.920 "min_cntlid": 1, 00:29:42.920 "max_cntlid": 65519, 00:29:42.920 "namespaces": [ 00:29:42.920 { 00:29:42.920 "nsid": 1, 00:29:42.920 "bdev_name": "Malloc0", 00:29:42.920 "name": "Malloc0", 00:29:42.920 "nguid": "320F62851F144452ACB740D140292036", 00:29:42.920 "uuid": "320f6285-1f14-4452-acb7-40d140292036" 00:29:42.920 } 00:29:42.920 ] 00:29:42.920 } 00:29:42.920 ] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=334640 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:42.920 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:42.921 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:42.921 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:42.921 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.179 Malloc1 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.179 [ 00:29:43.179 { 00:29:43.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:43.179 "subtype": "Discovery", 00:29:43.179 "listen_addresses": [], 00:29:43.179 "allow_any_host": true, 00:29:43.179 "hosts": [] 00:29:43.179 }, 00:29:43.179 { 00:29:43.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:43.179 "subtype": "NVMe", 00:29:43.179 "listen_addresses": [ 00:29:43.179 { 00:29:43.179 "trtype": "TCP", 00:29:43.179 "adrfam": "IPv4", 00:29:43.179 "traddr": "10.0.0.2", 00:29:43.179 "trsvcid": "4420" 00:29:43.179 } 00:29:43.179 ], 00:29:43.179 "allow_any_host": true, 00:29:43.179 "hosts": [], 00:29:43.179 "serial_number": "SPDK00000000000001", 00:29:43.179 "model_number": 
"SPDK bdev Controller", 00:29:43.179 "max_namespaces": 2, 00:29:43.179 "min_cntlid": 1, 00:29:43.179 "max_cntlid": 65519, 00:29:43.179 "namespaces": [ 00:29:43.179 { 00:29:43.179 "nsid": 1, 00:29:43.179 "bdev_name": "Malloc0", 00:29:43.179 "name": "Malloc0", 00:29:43.179 "nguid": "320F62851F144452ACB740D140292036", 00:29:43.179 "uuid": "320f6285-1f14-4452-acb7-40d140292036" 00:29:43.179 }, 00:29:43.179 { 00:29:43.179 "nsid": 2, 00:29:43.179 "bdev_name": "Malloc1", 00:29:43.179 "name": "Malloc1", 00:29:43.179 "nguid": "2B74099CBB734F7E9FFD427393129237", 00:29:43.179 "uuid": "2b74099c-bb73-4f7e-9ffd-427393129237" 00:29:43.179 } 00:29:43.179 ] 00:29:43.179 } 00:29:43.179 ] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 334640 00:29:43.179 Asynchronous Event Request test 00:29:43.179 Attaching to 10.0.0.2 00:29:43.179 Attached to 10.0.0.2 00:29:43.179 Registering asynchronous event callbacks... 00:29:43.179 Starting namespace attribute notice tests for all controllers... 00:29:43.179 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:43.179 aer_cb - Changed Namespace 00:29:43.179 Cleaning up... 
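The rpc_cmd calls driving this test map one-for-one onto SPDK's scripts/rpc.py, with the method names visible in the trace. A sketch of the same sequence; `RPC` here defaults to echoing the calls, and would need to point at `scripts/rpc.py` in an SPDK checkout (with nvmf_tgt already running) to execute them for real:

```shell
RPC="${RPC:-echo rpc.py}"   # default: print the calls instead of issuing them

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Adding a second namespace while the aer tool is attached is what raises
# the "Changed Namespace" AEN logged above:
$RPC bdev_malloc_create 64 4096 --name Malloc1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
```

The `-m 2` cap on the subsystem matches the `"max_namespaces": 2` field in the nvmf_get_subsystems output, which is why exactly one extra namespace can be hot-added.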
00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.179 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.180 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.180 rmmod nvme_tcp 
00:29:43.180 rmmod nvme_fabrics 00:29:43.438 rmmod nvme_keyring 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 334309 ']' 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 334309 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 334309 ']' 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 334309 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334309 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334309' 00:29:43.438 killing process with pid 334309 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 334309 00:29:43.438 11:24:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 334309 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
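The shutdown path traced here (killprocess in autotest_common.sh) probes the pid with `ps` and refuses to kill a sudo wrapper before signalling the reactor. A condensed sketch of that guard (the real helper also branches on `uname`, as the `'[' Linux = Linux ']'` check above shows):

```shell
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # already gone?
    # Refuse to kill a sudo wrapper; the trace shows the same ps probe.
    if [ "$(ps --no-headers -o comm= "$pid")" = "sudo" ]; then
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
}
```

The `echo` line is the same "killing process with pid ..." message that appears in the log just before the target exits.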
nvmf/common.sh@297 -- # iptr 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.438 11:24:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.973 00:29:45.973 real 0m5.300s 00:29:45.973 user 0m4.189s 00:29:45.973 sys 0m1.856s 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:45.973 ************************************ 00:29:45.973 END TEST nvmf_aer 00:29:45.973 ************************************ 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.973 ************************************ 00:29:45.973 START TEST nvmf_async_init 00:29:45.973 
************************************ 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:45.973 * Looking for test storage... 00:29:45.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:45.973 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:45.973 --rc genhtml_branch_coverage=1 00:29:45.973 --rc genhtml_function_coverage=1 00:29:45.973 --rc genhtml_legend=1 00:29:45.973 --rc geninfo_all_blocks=1 00:29:45.973 --rc geninfo_unexecuted_blocks=1 00:29:45.973 00:29:45.973 ' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.973 --rc genhtml_branch_coverage=1 00:29:45.973 --rc genhtml_function_coverage=1 00:29:45.973 --rc genhtml_legend=1 00:29:45.973 --rc geninfo_all_blocks=1 00:29:45.973 --rc geninfo_unexecuted_blocks=1 00:29:45.973 00:29:45.973 ' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.973 --rc genhtml_branch_coverage=1 00:29:45.973 --rc genhtml_function_coverage=1 00:29:45.973 --rc genhtml_legend=1 00:29:45.973 --rc geninfo_all_blocks=1 00:29:45.973 --rc geninfo_unexecuted_blocks=1 00:29:45.973 00:29:45.973 ' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:45.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.973 --rc genhtml_branch_coverage=1 00:29:45.973 --rc genhtml_function_coverage=1 00:29:45.973 --rc genhtml_legend=1 00:29:45.973 --rc geninfo_all_blocks=1 00:29:45.973 --rc geninfo_unexecuted_blocks=1 00:29:45.973 00:29:45.973 ' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.973 11:24:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.973 
11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.973 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a29c55badabb46c38e40f4bf4143231d 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.974 11:24:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.510 11:24:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:48.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:48.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:48.510 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:48.511 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:48.511 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:48.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms
00:29:48.511
00:29:48.511 --- 10.0.0.2 ping statistics ---
00:29:48.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:48.511 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:48.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:48.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:29:48.511
00:29:48.511 --- 10.0.0.1 ping statistics ---
00:29:48.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:48.511 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init --
common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=336900 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 336900 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 336900 ']' 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.511 11:24:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.511 [2024-11-17 11:24:12.777640] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:29:48.511 [2024-11-17 11:24:12.777731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.511 [2024-11-17 11:24:12.850409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.511 [2024-11-17 11:24:12.896604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.511 [2024-11-17 11:24:12.896658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.511 [2024-11-17 11:24:12.896672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.511 [2024-11-17 11:24:12.896684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.511 [2024-11-17 11:24:12.896694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.511 [2024-11-17 11:24:12.897272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.511 [2024-11-17 11:24:13.032576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.511 null0 00:29:48.511 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a29c55badabb46c38e40f4bf4143231d 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.512 [2024-11-17 11:24:13.072850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.512 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.770 nvme0n1 00:29:48.770 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.770 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:48.770 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.770 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.770 [ 00:29:48.770 { 00:29:48.770 "name": "nvme0n1", 00:29:48.770 "aliases": [ 00:29:48.770 "a29c55ba-dabb-46c3-8e40-f4bf4143231d" 00:29:48.770 ], 00:29:48.770 "product_name": "NVMe disk", 00:29:48.770 "block_size": 512, 00:29:48.770 "num_blocks": 2097152, 00:29:48.770 "uuid": "a29c55ba-dabb-46c3-8e40-f4bf4143231d", 00:29:48.770 "numa_id": 0, 00:29:48.770 "assigned_rate_limits": { 00:29:48.770 "rw_ios_per_sec": 0, 00:29:48.770 "rw_mbytes_per_sec": 0, 00:29:48.770 "r_mbytes_per_sec": 0, 00:29:48.770 "w_mbytes_per_sec": 0 00:29:48.770 }, 00:29:48.770 "claimed": false, 00:29:48.770 "zoned": false, 00:29:48.770 "supported_io_types": { 00:29:48.770 "read": true, 00:29:48.770 "write": true, 00:29:48.770 "unmap": false, 00:29:48.770 "flush": true, 00:29:48.770 "reset": true, 00:29:48.770 "nvme_admin": true, 00:29:48.770 "nvme_io": true, 00:29:48.770 "nvme_io_md": false, 00:29:48.770 "write_zeroes": true, 00:29:48.770 "zcopy": false, 00:29:48.770 "get_zone_info": false, 00:29:48.770 "zone_management": false, 00:29:48.770 "zone_append": false, 00:29:48.770 "compare": true, 00:29:48.770 "compare_and_write": true, 00:29:48.770 "abort": true, 00:29:48.770 "seek_hole": false, 00:29:48.770 "seek_data": false, 00:29:48.770 "copy": true, 00:29:48.770 
"nvme_iov_md": false 00:29:48.770 }, 00:29:48.770 "memory_domains": [ 00:29:48.770 { 00:29:48.770 "dma_device_id": "system", 00:29:48.770 "dma_device_type": 1 00:29:48.770 } 00:29:48.770 ], 00:29:48.770 "driver_specific": { 00:29:48.770 "nvme": [ 00:29:48.770 { 00:29:48.770 "trid": { 00:29:48.770 "trtype": "TCP", 00:29:48.770 "adrfam": "IPv4", 00:29:48.770 "traddr": "10.0.0.2", 00:29:48.770 "trsvcid": "4420", 00:29:48.770 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:48.770 }, 00:29:48.770 "ctrlr_data": { 00:29:48.770 "cntlid": 1, 00:29:48.770 "vendor_id": "0x8086", 00:29:48.770 "model_number": "SPDK bdev Controller", 00:29:48.771 "serial_number": "00000000000000000000", 00:29:48.771 "firmware_revision": "25.01", 00:29:48.771 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.771 "oacs": { 00:29:48.771 "security": 0, 00:29:48.771 "format": 0, 00:29:48.771 "firmware": 0, 00:29:48.771 "ns_manage": 0 00:29:48.771 }, 00:29:48.771 "multi_ctrlr": true, 00:29:48.771 "ana_reporting": false 00:29:48.771 }, 00:29:48.771 "vs": { 00:29:48.771 "nvme_version": "1.3" 00:29:48.771 }, 00:29:48.771 "ns_data": { 00:29:48.771 "id": 1, 00:29:48.771 "can_share": true 00:29:48.771 } 00:29:48.771 } 00:29:48.771 ], 00:29:48.771 "mp_policy": "active_passive" 00:29:48.771 } 00:29:48.771 } 00:29:48.771 ] 00:29:48.771 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.771 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:48.771 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.771 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:48.771 [2024-11-17 11:24:13.321928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.771 [2024-11-17 11:24:13.322004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xc82700 (9): Bad file descriptor 00:29:49.030 [2024-11-17 11:24:13.453666] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 [ 00:29:49.030 { 00:29:49.030 "name": "nvme0n1", 00:29:49.030 "aliases": [ 00:29:49.030 "a29c55ba-dabb-46c3-8e40-f4bf4143231d" 00:29:49.030 ], 00:29:49.030 "product_name": "NVMe disk", 00:29:49.030 "block_size": 512, 00:29:49.030 "num_blocks": 2097152, 00:29:49.030 "uuid": "a29c55ba-dabb-46c3-8e40-f4bf4143231d", 00:29:49.030 "numa_id": 0, 00:29:49.030 "assigned_rate_limits": { 00:29:49.030 "rw_ios_per_sec": 0, 00:29:49.030 "rw_mbytes_per_sec": 0, 00:29:49.030 "r_mbytes_per_sec": 0, 00:29:49.030 "w_mbytes_per_sec": 0 00:29:49.030 }, 00:29:49.030 "claimed": false, 00:29:49.030 "zoned": false, 00:29:49.030 "supported_io_types": { 00:29:49.030 "read": true, 00:29:49.030 "write": true, 00:29:49.030 "unmap": false, 00:29:49.030 "flush": true, 00:29:49.030 "reset": true, 00:29:49.030 "nvme_admin": true, 00:29:49.030 "nvme_io": true, 00:29:49.030 "nvme_io_md": false, 00:29:49.030 "write_zeroes": true, 00:29:49.030 "zcopy": false, 00:29:49.030 "get_zone_info": false, 00:29:49.030 "zone_management": false, 00:29:49.030 "zone_append": false, 00:29:49.030 "compare": true, 00:29:49.030 "compare_and_write": true, 00:29:49.030 "abort": true, 00:29:49.030 "seek_hole": false, 00:29:49.030 "seek_data": false, 00:29:49.030 "copy": true, 00:29:49.030 "nvme_iov_md": false 00:29:49.030 }, 00:29:49.030 "memory_domains": [ 
00:29:49.030 { 00:29:49.030 "dma_device_id": "system", 00:29:49.030 "dma_device_type": 1 00:29:49.030 } 00:29:49.030 ], 00:29:49.030 "driver_specific": { 00:29:49.030 "nvme": [ 00:29:49.030 { 00:29:49.030 "trid": { 00:29:49.030 "trtype": "TCP", 00:29:49.030 "adrfam": "IPv4", 00:29:49.030 "traddr": "10.0.0.2", 00:29:49.030 "trsvcid": "4420", 00:29:49.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:49.030 }, 00:29:49.030 "ctrlr_data": { 00:29:49.030 "cntlid": 2, 00:29:49.030 "vendor_id": "0x8086", 00:29:49.030 "model_number": "SPDK bdev Controller", 00:29:49.030 "serial_number": "00000000000000000000", 00:29:49.030 "firmware_revision": "25.01", 00:29:49.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.030 "oacs": { 00:29:49.030 "security": 0, 00:29:49.030 "format": 0, 00:29:49.030 "firmware": 0, 00:29:49.030 "ns_manage": 0 00:29:49.030 }, 00:29:49.030 "multi_ctrlr": true, 00:29:49.030 "ana_reporting": false 00:29:49.030 }, 00:29:49.030 "vs": { 00:29:49.030 "nvme_version": "1.3" 00:29:49.030 }, 00:29:49.030 "ns_data": { 00:29:49.030 "id": 1, 00:29:49.030 "can_share": true 00:29:49.030 } 00:29:49.030 } 00:29:49.030 ], 00:29:49.030 "mp_policy": "active_passive" 00:29:49.030 } 00:29:49.030 } 00:29:49.030 ] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dazOkVnO6e 
00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dazOkVnO6e 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.dazOkVnO6e 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 [2024-11-17 11:24:13.506571] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:49.030 [2024-11-17 11:24:13.506721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 [2024-11-17 11:24:13.522612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:49.030 nvme0n1 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.030 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.030 [ 00:29:49.030 { 00:29:49.030 "name": "nvme0n1", 00:29:49.030 "aliases": [ 00:29:49.030 "a29c55ba-dabb-46c3-8e40-f4bf4143231d" 00:29:49.030 ], 00:29:49.030 "product_name": "NVMe disk", 00:29:49.030 "block_size": 512, 00:29:49.030 "num_blocks": 2097152, 00:29:49.030 "uuid": "a29c55ba-dabb-46c3-8e40-f4bf4143231d", 00:29:49.030 "numa_id": 0, 00:29:49.030 "assigned_rate_limits": { 00:29:49.030 "rw_ios_per_sec": 0, 00:29:49.030 
"rw_mbytes_per_sec": 0, 00:29:49.030 "r_mbytes_per_sec": 0, 00:29:49.030 "w_mbytes_per_sec": 0 00:29:49.031 }, 00:29:49.031 "claimed": false, 00:29:49.031 "zoned": false, 00:29:49.031 "supported_io_types": { 00:29:49.031 "read": true, 00:29:49.031 "write": true, 00:29:49.031 "unmap": false, 00:29:49.031 "flush": true, 00:29:49.031 "reset": true, 00:29:49.031 "nvme_admin": true, 00:29:49.031 "nvme_io": true, 00:29:49.031 "nvme_io_md": false, 00:29:49.031 "write_zeroes": true, 00:29:49.031 "zcopy": false, 00:29:49.031 "get_zone_info": false, 00:29:49.031 "zone_management": false, 00:29:49.031 "zone_append": false, 00:29:49.031 "compare": true, 00:29:49.031 "compare_and_write": true, 00:29:49.031 "abort": true, 00:29:49.031 "seek_hole": false, 00:29:49.031 "seek_data": false, 00:29:49.031 "copy": true, 00:29:49.031 "nvme_iov_md": false 00:29:49.031 }, 00:29:49.031 "memory_domains": [ 00:29:49.031 { 00:29:49.031 "dma_device_id": "system", 00:29:49.031 "dma_device_type": 1 00:29:49.031 } 00:29:49.031 ], 00:29:49.031 "driver_specific": { 00:29:49.031 "nvme": [ 00:29:49.031 { 00:29:49.031 "trid": { 00:29:49.031 "trtype": "TCP", 00:29:49.031 "adrfam": "IPv4", 00:29:49.031 "traddr": "10.0.0.2", 00:29:49.031 "trsvcid": "4421", 00:29:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:49.031 }, 00:29:49.031 "ctrlr_data": { 00:29:49.031 "cntlid": 3, 00:29:49.031 "vendor_id": "0x8086", 00:29:49.031 "model_number": "SPDK bdev Controller", 00:29:49.031 "serial_number": "00000000000000000000", 00:29:49.031 "firmware_revision": "25.01", 00:29:49.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.031 "oacs": { 00:29:49.031 "security": 0, 00:29:49.031 "format": 0, 00:29:49.031 "firmware": 0, 00:29:49.031 "ns_manage": 0 00:29:49.031 }, 00:29:49.031 "multi_ctrlr": true, 00:29:49.031 "ana_reporting": false 00:29:49.031 }, 00:29:49.031 "vs": { 00:29:49.031 "nvme_version": "1.3" 00:29:49.031 }, 00:29:49.031 "ns_data": { 00:29:49.031 "id": 1, 00:29:49.031 "can_share": true 00:29:49.031 } 
00:29:49.031 } 00:29:49.031 ], 00:29:49.031 "mp_policy": "active_passive" 00:29:49.031 } 00:29:49.031 } 00:29:49.031 ] 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.dazOkVnO6e 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.031 rmmod nvme_tcp 00:29:49.031 rmmod nvme_fabrics 00:29:49.031 rmmod nvme_keyring 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:49.031 11:24:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 336900 ']' 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 336900 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 336900 ']' 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 336900 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.031 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336900 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336900' 00:29:49.290 killing process with pid 336900 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 336900 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 336900 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.290 11:24:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.290 11:24:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.829 00:29:51.829 real 0m5.717s 00:29:51.829 user 0m2.154s 00:29:51.829 sys 0m1.985s 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:51.829 ************************************ 00:29:51.829 END TEST nvmf_async_init 00:29:51.829 ************************************ 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.829 ************************************ 00:29:51.829 START TEST dma 00:29:51.829 ************************************ 00:29:51.829 11:24:15 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:51.829 * 
Looking for test storage... 00:29:51.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.829 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.829 --rc genhtml_branch_coverage=1 00:29:51.829 --rc genhtml_function_coverage=1 00:29:51.830 --rc genhtml_legend=1 00:29:51.830 --rc geninfo_all_blocks=1 00:29:51.830 --rc geninfo_unexecuted_blocks=1 00:29:51.830 00:29:51.830 ' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.830 --rc genhtml_branch_coverage=1 00:29:51.830 --rc genhtml_function_coverage=1 
00:29:51.830 --rc genhtml_legend=1 00:29:51.830 --rc geninfo_all_blocks=1 00:29:51.830 --rc geninfo_unexecuted_blocks=1 00:29:51.830 00:29:51.830 ' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.830 --rc genhtml_branch_coverage=1 00:29:51.830 --rc genhtml_function_coverage=1 00:29:51.830 --rc genhtml_legend=1 00:29:51.830 --rc geninfo_all_blocks=1 00:29:51.830 --rc geninfo_unexecuted_blocks=1 00:29:51.830 00:29:51.830 ' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.830 --rc genhtml_branch_coverage=1 00:29:51.830 --rc genhtml_function_coverage=1 00:29:51.830 --rc genhtml_legend=1 00:29:51.830 --rc geninfo_all_blocks=1 00:29:51.830 --rc geninfo_unexecuted_blocks=1 00:29:51.830 00:29:51.830 ' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:51.830 
11:24:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:51.830 00:29:51.830 real 0m0.173s 00:29:51.830 user 0m0.121s 00:29:51.830 sys 0m0.063s 00:29:51.830 11:24:16 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:51.830 ************************************ 00:29:51.830 END TEST dma 00:29:51.830 ************************************ 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.830 ************************************ 00:29:51.830 START TEST nvmf_identify 00:29:51.830 ************************************ 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:51.830 * Looking for test storage... 
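The "[: : integer expression expected" message that common.sh prints at line 33 in the trace above is a standard bash pitfall: the POSIX `[` builtin requires both operands of `-eq` to be integers, and the tested variable expands to an empty string. A minimal reproduction with the usual fix (the variable name below is a made-up stand-in, not the one common.sh actually tests):

```shell
# NVMF_INTERACTIVE is a hypothetical stand-in for the empty variable
# that common.sh@33 feeds to '[' ... -eq 1 ']'.
NVMF_INTERACTIVE=""

MODE=unknown
# '[' "" -eq 1 ']' prints "integer expression expected" on stderr and
# returns status 2, so this branch is simply never taken.
if [ "$NVMF_INTERACTIVE" -eq 1 ] 2>/dev/null; then
    MODE=interactive
fi

# Defaulting the expansion to 0 keeps the arithmetic test well-formed.
if [ "${NVMF_INTERACTIVE:-0}" -eq 1 ]; then
    MODE=interactive
else
    MODE=batch
fi
echo "$MODE"
```

The harness tolerates the logged error because the failed test just falls through to the next statement, which is why the run continues past it.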
00:29:51.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.830 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.831 --rc genhtml_branch_coverage=1 00:29:51.831 --rc genhtml_function_coverage=1 00:29:51.831 --rc genhtml_legend=1 00:29:51.831 --rc geninfo_all_blocks=1 00:29:51.831 --rc geninfo_unexecuted_blocks=1 00:29:51.831 00:29:51.831 ' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.831 --rc genhtml_branch_coverage=1 00:29:51.831 --rc genhtml_function_coverage=1 00:29:51.831 --rc genhtml_legend=1 00:29:51.831 --rc geninfo_all_blocks=1 00:29:51.831 --rc geninfo_unexecuted_blocks=1 00:29:51.831 00:29:51.831 ' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.831 --rc genhtml_branch_coverage=1 00:29:51.831 --rc genhtml_function_coverage=1 00:29:51.831 --rc genhtml_legend=1 00:29:51.831 --rc geninfo_all_blocks=1 00:29:51.831 --rc geninfo_unexecuted_blocks=1 00:29:51.831 00:29:51.831 ' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.831 --rc genhtml_branch_coverage=1 00:29:51.831 --rc genhtml_function_coverage=1 00:29:51.831 --rc genhtml_legend=1 00:29:51.831 --rc geninfo_all_blocks=1 00:29:51.831 --rc geninfo_unexecuted_blocks=1 00:29:51.831 00:29:51.831 ' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.831 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.832 11:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:54.368 11:24:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:54.368 
11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.368 11:24:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
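The "Found net devices under ..." lines above come from a sysfs glob: for each NVMe-capable PCI address, common.sh lists the device's `net/` directory and strips the paths down to bare interface names. A self-contained sketch of that step (a temporary directory stands in for `/sys` so it runs without the E810 NIC present; the PCI address and interface name are taken from the log):

```shell
# Fake sysfs tree so the sketch runs on any machine.
sysfs=$(mktemp -d)
pci=0000:0a:00.0
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

# common.sh@411: one array element per interface directory under net/.
pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)

# common.sh@427: "${arr[@]##*/}" strips everything up to the last '/'
# in every element, leaving interface names such as cvl_0_0.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```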
00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:54.368 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:54.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:29:54.368 00:29:54.369 --- 10.0.0.2 ping statistics --- 00:29:54.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.369 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:54.369 00:29:54.369 --- 10.0.0.1 ping statistics --- 00:29:54.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.369 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=339156 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 339156 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 339156 ']' 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
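The nvmf_tcp_init sequence traced above puts the target-side port into its own network namespace and then verifies both directions with ping before launching nvmf_tgt. Collected into one function for readability (defining it is side-effect-free; actually running it needs root and the two cvl interfaces; names and addresses are the ones from the log):

```shell
# Condensed from the nvmf_tcp_init trace above. Calling this requires
# root plus the cvl_0_0/cvl_0_1 interfaces; defining it is harmless.
setup_nvmf_tcp_topology() {
    ip netns add cvl_0_0_ns_spdk                  # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port inside
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP (port 4420) on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # both directions should answer before the target is started
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
}
```

Running the target under `ip netns exec cvl_0_0_ns_spdk` (as the trace does via NVMF_TARGET_NS_CMD) is what lets one machine act as both initiator and target over a real NIC pair.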
00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.369 11:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.369 [2024-11-17 11:24:18.777553] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:54.369 [2024-11-17 11:24:18.777646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.369 [2024-11-17 11:24:18.848521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.369 [2024-11-17 11:24:18.893069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.369 [2024-11-17 11:24:18.893130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.369 [2024-11-17 11:24:18.893153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.369 [2024-11-17 11:24:18.893168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.369 [2024-11-17 11:24:18.893177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:54.369 [2024-11-17 11:24:18.894647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.369 [2024-11-17 11:24:18.894711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.369 [2024-11-17 11:24:18.894777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.369 [2024-11-17 11:24:18.894780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.369 [2024-11-17 11:24:19.012650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.369 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 Malloc0 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 [2024-11-17 11:24:19.095715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 11:24:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:54.631 [ 00:29:54.631 { 00:29:54.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:54.631 "subtype": "Discovery", 00:29:54.631 "listen_addresses": [ 00:29:54.631 { 00:29:54.631 "trtype": "TCP", 00:29:54.631 "adrfam": "IPv4", 00:29:54.631 "traddr": "10.0.0.2", 00:29:54.631 "trsvcid": "4420" 00:29:54.631 } 00:29:54.631 ], 00:29:54.631 "allow_any_host": true, 00:29:54.631 "hosts": [] 00:29:54.631 }, 00:29:54.631 { 00:29:54.631 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.631 "subtype": "NVMe", 00:29:54.631 "listen_addresses": [ 00:29:54.631 { 00:29:54.631 "trtype": "TCP", 00:29:54.631 "adrfam": "IPv4", 00:29:54.631 "traddr": "10.0.0.2", 00:29:54.631 "trsvcid": "4420" 00:29:54.631 } 00:29:54.631 ], 00:29:54.631 "allow_any_host": true, 00:29:54.631 "hosts": [], 00:29:54.631 "serial_number": "SPDK00000000000001", 00:29:54.631 "model_number": "SPDK bdev Controller", 00:29:54.631 "max_namespaces": 32, 00:29:54.631 "min_cntlid": 1, 00:29:54.631 "max_cntlid": 65519, 00:29:54.631 "namespaces": [ 00:29:54.631 { 00:29:54.631 "nsid": 1, 00:29:54.631 "bdev_name": "Malloc0", 00:29:54.631 "name": "Malloc0", 00:29:54.631 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:54.631 "eui64": "ABCDEF0123456789", 00:29:54.631 "uuid": "3a449b2d-ad71-48b2-b4ea-a83300d56199" 00:29:54.631 } 00:29:54.631 ] 00:29:54.631 } 00:29:54.631 ] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.631 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:54.631 [2024-11-17 11:24:19.133169] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:54.631 [2024-11-17 11:24:19.133221] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339189 ] 00:29:54.631 [2024-11-17 11:24:19.181554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:54.631 [2024-11-17 11:24:19.181620] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:54.631 [2024-11-17 11:24:19.181632] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:54.631 [2024-11-17 11:24:19.181648] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:54.631 [2024-11-17 11:24:19.181664] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:54.631 [2024-11-17 11:24:19.185961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:54.631 [2024-11-17 11:24:19.186033] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12d2650 0 00:29:54.631 [2024-11-17 11:24:19.192542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:54.631 [2024-11-17 11:24:19.192567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:54.631 [2024-11-17 11:24:19.192577] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:54.631 [2024-11-17 11:24:19.192583] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:54.631 [2024-11-17 11:24:19.192626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.192640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.192648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.631 [2024-11-17 11:24:19.192668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:54.631 [2024-11-17 11:24:19.192695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.631 [2024-11-17 11:24:19.199539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.631 [2024-11-17 11:24:19.199557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.631 [2024-11-17 11:24:19.199565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.631 [2024-11-17 11:24:19.199594] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:54.631 [2024-11-17 11:24:19.199622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:54.631 [2024-11-17 11:24:19.199636] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:54.631 [2024-11-17 11:24:19.199660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 
00:29:54.631 [2024-11-17 11:24:19.199687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.631 [2024-11-17 11:24:19.199712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.631 [2024-11-17 11:24:19.199843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.631 [2024-11-17 11:24:19.199856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.631 [2024-11-17 11:24:19.199863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.631 [2024-11-17 11:24:19.199879] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:54.631 [2024-11-17 11:24:19.199892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:54.631 [2024-11-17 11:24:19.199905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.199918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.631 [2024-11-17 11:24:19.199929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.631 [2024-11-17 11:24:19.199950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.631 [2024-11-17 11:24:19.200023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.631 [2024-11-17 11:24:19.200037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:54.631 [2024-11-17 11:24:19.200045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.631 [2024-11-17 11:24:19.200061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:54.631 [2024-11-17 11:24:19.200076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:54.631 [2024-11-17 11:24:19.200088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200102] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.631 [2024-11-17 11:24:19.200112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.631 [2024-11-17 11:24:19.200133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.631 [2024-11-17 11:24:19.200210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.631 [2024-11-17 11:24:19.200224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.631 [2024-11-17 11:24:19.200230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.631 [2024-11-17 11:24:19.200246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:54.631 [2024-11-17 11:24:19.200267] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.631 [2024-11-17 11:24:19.200293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.631 [2024-11-17 11:24:19.200314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.631 [2024-11-17 11:24:19.200416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.631 [2024-11-17 11:24:19.200428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.631 [2024-11-17 11:24:19.200434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.631 [2024-11-17 11:24:19.200441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.631 [2024-11-17 11:24:19.200449] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:54.632 [2024-11-17 11:24:19.200457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:54.632 [2024-11-17 11:24:19.200470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:54.632 [2024-11-17 11:24:19.200580] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:54.632 [2024-11-17 11:24:19.200590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:54.632 [2024-11-17 11:24:19.200606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.200629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.200651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.632 [2024-11-17 11:24:19.200780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.200793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.200800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.200815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:54.632 [2024-11-17 11:24:19.200831] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.200856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.200876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.632 [2024-11-17 
11:24:19.200952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.200965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.200972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.200978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.200991] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:54.632 [2024-11-17 11:24:19.201000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201015] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:54.632 [2024-11-17 11:24:19.201032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.201093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.632 [2024-11-17 11:24:19.201228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.632 [2024-11-17 11:24:19.201240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:54.632 [2024-11-17 11:24:19.201248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d2650): datao=0, datal=4096, cccid=0 00:29:54.632 [2024-11-17 11:24:19.201262] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132cf40) on tqpair(0x12d2650): expected_datao=0, payload_size=4096 00:29:54.632 [2024-11-17 11:24:19.201271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201283] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.201313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.201320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.201339] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:54.632 [2024-11-17 11:24:19.201348] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:54.632 [2024-11-17 11:24:19.201356] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:54.632 [2024-11-17 11:24:19.201370] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:54.632 [2024-11-17 11:24:19.201380] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:54.632 [2024-11-17 11:24:19.201388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:54.632 [2024-11-17 11:24:19.201471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.632 [2024-11-17 11:24:19.201600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.201615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.201621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.201640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201663] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.632 [2024-11-17 11:24:19.201673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.632 [2024-11-17 11:24:19.201703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.632 [2024-11-17 11:24:19.201733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.632 [2024-11-17 11:24:19.201763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:54.632 [2024-11-17 11:24:19.201788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.201795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.201805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.201842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132cf40, cid 0, qid 0 00:29:54.632 [2024-11-17 11:24:19.201853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d0c0, cid 1, qid 0 00:29:54.632 [2024-11-17 11:24:19.201861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d240, cid 2, qid 0 00:29:54.632 [2024-11-17 11:24:19.201869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.632 [2024-11-17 11:24:19.201877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d540, cid 4, qid 0 00:29:54.632 [2024-11-17 11:24:19.202015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.202029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.202036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d540) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.202061] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:54.632 [2024-11-17 11:24:19.202071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:54.632 [2024-11-17 11:24:19.202088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.202108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.202129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d540, cid 4, qid 0 00:29:54.632 [2024-11-17 11:24:19.202216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.632 [2024-11-17 11:24:19.202228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.632 [2024-11-17 11:24:19.202234] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202241] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d2650): datao=0, datal=4096, cccid=4 00:29:54.632 [2024-11-17 11:24:19.202248] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132d540) on tqpair(0x12d2650): expected_datao=0, payload_size=4096 00:29:54.632 [2024-11-17 11:24:19.202255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202265] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202273] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.202293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.202300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x132d540) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.202325] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:54.632 [2024-11-17 11:24:19.202363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.202386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.202398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.202421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.632 [2024-11-17 11:24:19.202448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d540, cid 4, qid 0 00:29:54.632 [2024-11-17 11:24:19.202460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d6c0, cid 5, qid 0 00:29:54.632 [2024-11-17 11:24:19.202635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.632 [2024-11-17 11:24:19.202649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.632 [2024-11-17 11:24:19.202656] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202662] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d2650): datao=0, datal=1024, cccid=4 00:29:54.632 [2024-11-17 11:24:19.202670] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132d540) on tqpair(0x12d2650): expected_datao=0, payload_size=1024 00:29:54.632 [2024-11-17 11:24:19.202681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202691] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202699] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.202717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.202723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.202730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d6c0) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.243593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.632 [2024-11-17 11:24:19.243612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.632 [2024-11-17 11:24:19.243620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.243627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d540) on tqpair=0x12d2650 00:29:54.632 [2024-11-17 11:24:19.243645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.243654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d2650) 00:29:54.632 [2024-11-17 11:24:19.243666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.632 [2024-11-17 11:24:19.243695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d540, cid 4, qid 0 00:29:54.632 [2024-11-17 11:24:19.243806] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.632 [2024-11-17 11:24:19.243818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.632 [2024-11-17 11:24:19.243825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.632 [2024-11-17 11:24:19.243832] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d2650): datao=0, datal=3072, cccid=4 00:29:54.632 [2024-11-17 11:24:19.243839] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132d540) on tqpair(0x12d2650): expected_datao=0, payload_size=3072 00:29:54.633 [2024-11-17 11:24:19.243847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.243857] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.243864] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.243884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.633 [2024-11-17 11:24:19.243895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.633 [2024-11-17 11:24:19.243902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.243909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d540) on tqpair=0x12d2650 00:29:54.633 [2024-11-17 11:24:19.243923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.243931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12d2650) 00:29:54.633 [2024-11-17 11:24:19.243942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.633 [2024-11-17 11:24:19.243970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d540, cid 4, qid 0 00:29:54.633 [2024-11-17 
11:24:19.244069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.633 [2024-11-17 11:24:19.244082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.633 [2024-11-17 11:24:19.244089] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.244095] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12d2650): datao=0, datal=8, cccid=4 00:29:54.633 [2024-11-17 11:24:19.244102] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x132d540) on tqpair(0x12d2650): expected_datao=0, payload_size=8 00:29:54.633 [2024-11-17 11:24:19.244109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.244124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.244132] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.284617] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.633 [2024-11-17 11:24:19.284635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.633 [2024-11-17 11:24:19.284643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.633 [2024-11-17 11:24:19.284650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d540) on tqpair=0x12d2650 00:29:54.633 ===================================================== 00:29:54.633 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:54.633 ===================================================== 00:29:54.633 Controller Capabilities/Features 00:29:54.633 ================================ 00:29:54.633 Vendor ID: 0000 00:29:54.633 Subsystem Vendor ID: 0000 00:29:54.633 Serial Number: .................... 00:29:54.633 Model Number: ........................................ 
00:29:54.633 Firmware Version: 25.01 00:29:54.633 Recommended Arb Burst: 0 00:29:54.633 IEEE OUI Identifier: 00 00 00 00:29:54.633 Multi-path I/O 00:29:54.633 May have multiple subsystem ports: No 00:29:54.633 May have multiple controllers: No 00:29:54.633 Associated with SR-IOV VF: No 00:29:54.633 Max Data Transfer Size: 131072 00:29:54.633 Max Number of Namespaces: 0 00:29:54.633 Max Number of I/O Queues: 1024 00:29:54.633 NVMe Specification Version (VS): 1.3 00:29:54.633 NVMe Specification Version (Identify): 1.3 00:29:54.633 Maximum Queue Entries: 128 00:29:54.633 Contiguous Queues Required: Yes 00:29:54.633 Arbitration Mechanisms Supported 00:29:54.633 Weighted Round Robin: Not Supported 00:29:54.633 Vendor Specific: Not Supported 00:29:54.633 Reset Timeout: 15000 ms 00:29:54.633 Doorbell Stride: 4 bytes 00:29:54.633 NVM Subsystem Reset: Not Supported 00:29:54.633 Command Sets Supported 00:29:54.633 NVM Command Set: Supported 00:29:54.633 Boot Partition: Not Supported 00:29:54.633 Memory Page Size Minimum: 4096 bytes 00:29:54.633 Memory Page Size Maximum: 4096 bytes 00:29:54.633 Persistent Memory Region: Not Supported 00:29:54.633 Optional Asynchronous Events Supported 00:29:54.633 Namespace Attribute Notices: Not Supported 00:29:54.633 Firmware Activation Notices: Not Supported 00:29:54.633 ANA Change Notices: Not Supported 00:29:54.633 PLE Aggregate Log Change Notices: Not Supported 00:29:54.633 LBA Status Info Alert Notices: Not Supported 00:29:54.633 EGE Aggregate Log Change Notices: Not Supported 00:29:54.633 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.633 Zone Descriptor Change Notices: Not Supported 00:29:54.633 Discovery Log Change Notices: Supported 00:29:54.633 Controller Attributes 00:29:54.633 128-bit Host Identifier: Not Supported 00:29:54.633 Non-Operational Permissive Mode: Not Supported 00:29:54.633 NVM Sets: Not Supported 00:29:54.633 Read Recovery Levels: Not Supported 00:29:54.633 Endurance Groups: Not Supported 00:29:54.633 
Predictable Latency Mode: Not Supported 00:29:54.633 Traffic Based Keep Alive: Not Supported 00:29:54.633 Namespace Granularity: Not Supported 00:29:54.633 SQ Associations: Not Supported 00:29:54.633 UUID List: Not Supported 00:29:54.633 Multi-Domain Subsystem: Not Supported 00:29:54.633 Fixed Capacity Management: Not Supported 00:29:54.633 Variable Capacity Management: Not Supported 00:29:54.633 Delete Endurance Group: Not Supported 00:29:54.633 Delete NVM Set: Not Supported 00:29:54.633 Extended LBA Formats Supported: Not Supported 00:29:54.633 Flexible Data Placement Supported: Not Supported 00:29:54.633 00:29:54.633 Controller Memory Buffer Support 00:29:54.633 ================================ 00:29:54.633 Supported: No 00:29:54.633 00:29:54.633 Persistent Memory Region Support 00:29:54.633 ================================ 00:29:54.633 Supported: No 00:29:54.633 00:29:54.633 Admin Command Set Attributes 00:29:54.633 ============================ 00:29:54.633 Security Send/Receive: Not Supported 00:29:54.633 Format NVM: Not Supported 00:29:54.633 Firmware Activate/Download: Not Supported 00:29:54.633 Namespace Management: Not Supported 00:29:54.633 Device Self-Test: Not Supported 00:29:54.633 Directives: Not Supported 00:29:54.633 NVMe-MI: Not Supported 00:29:54.633 Virtualization Management: Not Supported 00:29:54.633 Doorbell Buffer Config: Not Supported 00:29:54.633 Get LBA Status Capability: Not Supported 00:29:54.633 Command & Feature Lockdown Capability: Not Supported 00:29:54.633 Abort Command Limit: 1 00:29:54.633 Async Event Request Limit: 4 00:29:54.633 Number of Firmware Slots: N/A 00:29:54.633 Firmware Slot 1 Read-Only: N/A 00:29:54.633 Firmware Activation Without Reset: N/A 00:29:54.633 Multiple Update Detection Support: N/A 00:29:54.633 Firmware Update Granularity: No Information Provided 00:29:54.633 Per-Namespace SMART Log: No 00:29:54.633 Asymmetric Namespace Access Log Page: Not Supported 00:29:54.633 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:54.633 Command Effects Log Page: Not Supported 00:29:54.633 Get Log Page Extended Data: Supported 00:29:54.633 Telemetry Log Pages: Not Supported 00:29:54.633 Persistent Event Log Pages: Not Supported 00:29:54.633 Supported Log Pages Log Page: May Support 00:29:54.633 Commands Supported & Effects Log Page: Not Supported 00:29:54.633 Feature Identifiers & Effects Log Page: May Support 00:29:54.633 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.633 Data Area 4 for Telemetry Log: Not Supported 00:29:54.633 Error Log Page Entries Supported: 128 00:29:54.633 Keep Alive: Not Supported 00:29:54.633 00:29:54.633 NVM Command Set Attributes 00:29:54.633 ========================== 00:29:54.633 Submission Queue Entry Size 00:29:54.633 Max: 1 00:29:54.633 Min: 1 00:29:54.633 Completion Queue Entry Size 00:29:54.633 Max: 1 00:29:54.633 Min: 1 00:29:54.633 Number of Namespaces: 0 00:29:54.633 Compare Command: Not Supported 00:29:54.633 Write Uncorrectable Command: Not Supported 00:29:54.633 Dataset Management Command: Not Supported 00:29:54.633 Write Zeroes Command: Not Supported 00:29:54.633 Set Features Save Field: Not Supported 00:29:54.633 Reservations: Not Supported 00:29:54.633 Timestamp: Not Supported 00:29:54.633 Copy: Not Supported 00:29:54.633 Volatile Write Cache: Not Present 00:29:54.633 Atomic Write Unit (Normal): 1 00:29:54.633 Atomic Write Unit (PFail): 1 00:29:54.633 Atomic Compare & Write Unit: 1 00:29:54.633 Fused Compare & Write: Supported 00:29:54.633 Scatter-Gather List 00:29:54.633 SGL Command Set: Supported 00:29:54.633 SGL Keyed: Supported 00:29:54.633 SGL Bit Bucket Descriptor: Not Supported 00:29:54.633 SGL Metadata Pointer: Not Supported 00:29:54.633 Oversized SGL: Not Supported 00:29:54.633 SGL Metadata Address: Not Supported 00:29:54.633 SGL Offset: Supported 00:29:54.633 Transport SGL Data Block: Not Supported 00:29:54.633 Replay Protected Memory Block: Not Supported 00:29:54.633 00:29:54.633 
Firmware Slot Information 00:29:54.633 ========================= 00:29:54.633 Active slot: 0 00:29:54.633 00:29:54.633 00:29:54.633 Error Log 00:29:54.633 ========= 00:29:54.633 00:29:54.633 Active Namespaces 00:29:54.633 ================= 00:29:54.633 Discovery Log Page 00:29:54.634 ================== 00:29:54.634 Generation Counter: 2 00:29:54.634 Number of Records: 2 00:29:54.634 Record Format: 0 00:29:54.634 00:29:54.634 Discovery Log Entry 0 00:29:54.634 ---------------------- 00:29:54.634 Transport Type: 3 (TCP) 00:29:54.634 Address Family: 1 (IPv4) 00:29:54.634 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:54.634 Entry Flags: 00:29:54.634 Duplicate Returned Information: 1 00:29:54.634 Explicit Persistent Connection Support for Discovery: 1 00:29:54.634 Transport Requirements: 00:29:54.634 Secure Channel: Not Required 00:29:54.634 Port ID: 0 (0x0000) 00:29:54.634 Controller ID: 65535 (0xffff) 00:29:54.634 Admin Max SQ Size: 128 00:29:54.634 Transport Service Identifier: 4420 00:29:54.634 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:54.634 Transport Address: 10.0.0.2 00:29:54.634 Discovery Log Entry 1 00:29:54.634 ---------------------- 00:29:54.634 Transport Type: 3 (TCP) 00:29:54.634 Address Family: 1 (IPv4) 00:29:54.634 Subsystem Type: 2 (NVM Subsystem) 00:29:54.634 Entry Flags: 00:29:54.634 Duplicate Returned Information: 0 00:29:54.634 Explicit Persistent Connection Support for Discovery: 0 00:29:54.634 Transport Requirements: 00:29:54.634 Secure Channel: Not Required 00:29:54.634 Port ID: 0 (0x0000) 00:29:54.634 Controller ID: 65535 (0xffff) 00:29:54.634 Admin Max SQ Size: 128 00:29:54.634 Transport Service Identifier: 4420 00:29:54.634 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:54.634 Transport Address: 10.0.0.2 [2024-11-17 11:24:19.284769] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:54.902 [2024-11-17 
11:24:19.284792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132cf40) on tqpair=0x12d2650 00:29:54.902 [2024-11-17 11:24:19.284816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.902 [2024-11-17 11:24:19.284825] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d0c0) on tqpair=0x12d2650 00:29:54.902 [2024-11-17 11:24:19.284833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.902 [2024-11-17 11:24:19.284841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d240) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.284848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.903 [2024-11-17 11:24:19.284856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.284863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.903 [2024-11-17 11:24:19.284881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.284890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.284897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.284908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.284933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 
11:24:19.285028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.285034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.285054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.285078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.285103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.285206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.285213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.285229] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:54.903 [2024-11-17 11:24:19.285237] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:54.903 [2024-11-17 11:24:19.285257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 
[2024-11-17 11:24:19.285272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.285283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.285303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.285442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.285449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.285472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.285497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.285520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.285625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.285631] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on 
tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.285654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.285679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.285700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.285829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.285835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285842] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.285858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.285873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.285883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.285903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.285979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.285992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:54.903 [2024-11-17 11:24:19.285999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.286026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.286052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.286072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.286160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.286172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.286178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.286200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.286225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.286245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.286322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.286335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.286341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.286364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.286389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.286409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.286485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.286498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.286505] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.286539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.286566] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.286586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.286660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.286673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.286680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.903 [2024-11-17 11:24:19.286703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.903 [2024-11-17 11:24:19.286722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.903 [2024-11-17 11:24:19.286733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.903 [2024-11-17 11:24:19.286753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.903 [2024-11-17 11:24:19.286889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.903 [2024-11-17 11:24:19.286902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.903 [2024-11-17 11:24:19.286908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.286915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.286931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.286939] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.286946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.286956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.286975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.287059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287073] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.287133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.287222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287235] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.287296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.287385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.287463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 
11:24:19.287605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.287680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.287770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287799] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 
11:24:19.287844] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.287913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.287924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.287931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.287953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.287968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.287978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.287997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.288070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.288081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.288088] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.288110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.288139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.288160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.288233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.288244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.288251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.288273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.288297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.288317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.288390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.288403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.288410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.288433] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.288447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.288457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.288477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.292552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.292568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.292575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.292581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.292613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.292623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.292629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12d2650) 00:29:54.904 [2024-11-17 11:24:19.292640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.904 [2024-11-17 11:24:19.292662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x132d3c0, cid 3, qid 0 00:29:54.904 [2024-11-17 11:24:19.292744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.904 [2024-11-17 11:24:19.292758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.904 [2024-11-17 11:24:19.292765] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.904 [2024-11-17 11:24:19.292771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x132d3c0) on tqpair=0x12d2650 00:29:54.904 [2024-11-17 11:24:19.292784] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:54.904 00:29:54.904 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:54.905 [2024-11-17 11:24:19.324648] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:29:54.905 [2024-11-17 11:24:19.324688] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339196 ] 00:29:54.905 [2024-11-17 11:24:19.372247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:54.905 [2024-11-17 11:24:19.372300] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:54.905 [2024-11-17 11:24:19.372311] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:54.905 [2024-11-17 11:24:19.372325] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:54.905 [2024-11-17 11:24:19.372338] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:54.905 [2024-11-17 11:24:19.375788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:54.905 [2024-11-17 11:24:19.375827] 
nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa12650 0 00:29:54.905 [2024-11-17 11:24:19.383542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:54.905 [2024-11-17 11:24:19.383562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:54.905 [2024-11-17 11:24:19.383570] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:54.905 [2024-11-17 11:24:19.383576] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:54.905 [2024-11-17 11:24:19.383611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.383625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.383631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.383645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:54.905 [2024-11-17 11:24:19.383671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.390684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.390704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.390712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.390719] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.390735] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:54.905 [2024-11-17 11:24:19.390750] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:54.905 [2024-11-17 11:24:19.390761] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:54.905 [2024-11-17 11:24:19.390779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.390788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.390795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.390807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.390832] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.390965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.390981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.390992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.390999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.391008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:54.905 [2024-11-17 11:24:19.391022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:54.905 [2024-11-17 11:24:19.391046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.391070] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.391093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.391174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.391189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.391196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.391211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:54.905 [2024-11-17 11:24:19.391230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.391243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.391269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.391293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.391368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.391383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.391389] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.391404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.391423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391433] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.391450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.391474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.391580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.391596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.391603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.391617] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:54.905 [2024-11-17 11:24:19.391630] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.391645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.391758] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:54.905 [2024-11-17 11:24:19.391767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.391779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.391793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.391803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.391826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.392022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.392037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.392044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.392051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.392059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:54.905 [2024-11-17 11:24:19.392077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.392088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.392094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.392105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.905 [2024-11-17 11:24:19.392127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.905 [2024-11-17 11:24:19.392257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.905 [2024-11-17 11:24:19.392272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.905 [2024-11-17 11:24:19.392279] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.392285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.905 [2024-11-17 11:24:19.392293] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:54.905 [2024-11-17 11:24:19.392302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:54.905 [2024-11-17 11:24:19.392316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:54.905 [2024-11-17 11:24:19.392333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:54.905 [2024-11-17 11:24:19.392347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.905 [2024-11-17 11:24:19.392355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.905 [2024-11-17 11:24:19.392366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:54.905 [2024-11-17 11:24:19.392387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.906 [2024-11-17 11:24:19.392540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.906 [2024-11-17 11:24:19.392556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.906 [2024-11-17 11:24:19.392566] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392578] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=4096, cccid=0 00:29:54.906 [2024-11-17 11:24:19.392588] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6cf40) on tqpair(0xa12650): expected_datao=0, payload_size=4096 00:29:54.906 [2024-11-17 11:24:19.392596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392614] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392626] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.906 [2024-11-17 11:24:19.392654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.906 [2024-11-17 11:24:19.392660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.906 [2024-11-17 11:24:19.392677] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:54.906 [2024-11-17 11:24:19.392685] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:54.906 [2024-11-17 11:24:19.392692] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
CNTLID 0x0001 00:29:54.906 [2024-11-17 11:24:19.392704] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:54.906 [2024-11-17 11:24:19.392713] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:54.906 [2024-11-17 11:24:19.392721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.392740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.392755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.392779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:54.906 [2024-11-17 11:24:19.392802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.906 [2024-11-17 11:24:19.392882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.906 [2024-11-17 11:24:19.392897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.906 [2024-11-17 11:24:19.392903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.906 [2024-11-17 11:24:19.392921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392931] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.392948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.906 [2024-11-17 11:24:19.392958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.392971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.392986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.906 [2024-11-17 11:24:19.392997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.393018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.906 [2024-11-17 11:24:19.393028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.393049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.906 [2024-11-17 11:24:19.393058] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.393120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.906 [2024-11-17 11:24:19.393142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6cf40, cid 0, qid 0 00:29:54.906 [2024-11-17 11:24:19.393153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d0c0, cid 1, qid 0 00:29:54.906 [2024-11-17 11:24:19.393161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d240, cid 2, qid 0 00:29:54.906 [2024-11-17 11:24:19.393183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0 00:29:54.906 [2024-11-17 11:24:19.393191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.906 [2024-11-17 11:24:19.393370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.906 [2024-11-17 11:24:19.393386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.906 [2024-11-17 11:24:19.393392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.906 [2024-11-17 11:24:19.393413] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:54.906 [2024-11-17 11:24:19.393425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.393505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:54.906 [2024-11-17 11:24:19.393534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.906 [2024-11-17 11:24:19.393686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.906 [2024-11-17 11:24:19.393702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.906 [2024-11-17 11:24:19.393708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.906 [2024-11-17 11:24:19.393786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393811] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.393827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.393835] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.393846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.906 [2024-11-17 11:24:19.393868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.906 [2024-11-17 11:24:19.394009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.906 [2024-11-17 11:24:19.394024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.906 [2024-11-17 11:24:19.394030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.394038] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=4096, cccid=4 00:29:54.906 [2024-11-17 11:24:19.394051] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d540) on tqpair(0xa12650): expected_datao=0, payload_size=4096 00:29:54.906 [2024-11-17 11:24:19.394060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.394078] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.394087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.434722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.906 [2024-11-17 11:24:19.434742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.906 [2024-11-17 11:24:19.434750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.434760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.906 [2024-11-17 11:24:19.434784] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:54.906 [2024-11-17 11:24:19.434803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.434823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:54.906 [2024-11-17 11:24:19.434839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.906 [2024-11-17 11:24:19.434847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.906 [2024-11-17 11:24:19.434859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.906 [2024-11-17 11:24:19.434883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.906 [2024-11-17 11:24:19.434992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.906 [2024-11-17 11:24:19.435010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.907 [2024-11-17 11:24:19.435022] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.435029] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=4096, cccid=4 00:29:54.907 [2024-11-17 11:24:19.435036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d540) on tqpair(0xa12650): expected_datao=0, payload_size=4096 00:29:54.907 [2024-11-17 11:24:19.435048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.907 
[2024-11-17 11:24:19.435067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.435077] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.475684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.475703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.475711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.475718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.475742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.475763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.475782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.475790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.475801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.475826] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.907 [2024-11-17 11:24:19.475921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.907 [2024-11-17 11:24:19.475937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.907 [2024-11-17 11:24:19.475944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.475955] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=4096, cccid=4 00:29:54.907 [2024-11-17 11:24:19.475966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d540) on tqpair(0xa12650): expected_datao=0, payload_size=4096 00:29:54.907 [2024-11-17 11:24:19.475974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.475993] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.476002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.519557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.519579] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.519601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:29:54.907 [2024-11-17 11:24:19.519668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519676] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:54.907 [2024-11-17 11:24:19.519688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:54.907 [2024-11-17 11:24:19.519697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:54.907 [2024-11-17 11:24:19.519717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.519737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.519749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.519771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.907 [2024-11-17 11:24:19.519798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 00:29:54.907 [2024-11-17 11:24:19.519811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d6c0, cid 5, qid 0 00:29:54.907 [2024-11-17 11:24:19.519909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:29:54.907 [2024-11-17 11:24:19.519924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.519931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.519948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.519957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.519963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d6c0) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.519987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.519998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.520008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.520031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d6c0, cid 5, qid 0 00:29:54.907 [2024-11-17 11:24:19.520116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.520132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.520139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.520146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d6c0) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.520162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.520174] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.520185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.520206] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d6c0, cid 5, qid 0 00:29:54.907 [2024-11-17 11:24:19.520286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.520301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.520308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.520315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d6c0) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.520337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.520348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.520359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.520380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d6c0, cid 5, qid 0 00:29:54.907 [2024-11-17 11:24:19.520457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.907 [2024-11-17 11:24:19.520471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.907 [2024-11-17 11:24:19.520478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.520485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d6c0) on tqpair=0xa12650 00:29:54.907 [2024-11-17 11:24:19.520510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:29:54.907 [2024-11-17 11:24:19.520522] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa12650) 00:29:54.907 [2024-11-17 11:24:19.524547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.907 [2024-11-17 11:24:19.524561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.907 [2024-11-17 11:24:19.524568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa12650) 00:29:54.908 [2024-11-17 11:24:19.524578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.908 [2024-11-17 11:24:19.524588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xa12650) 00:29:54.908 [2024-11-17 11:24:19.524604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.908 [2024-11-17 11:24:19.524615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa12650) 00:29:54.908 [2024-11-17 11:24:19.524631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.908 [2024-11-17 11:24:19.524654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d6c0, cid 5, qid 0 00:29:54.908 [2024-11-17 11:24:19.524681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d540, cid 4, qid 0 
00:29:54.908 [2024-11-17 11:24:19.524689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d840, cid 6, qid 0 00:29:54.908 [2024-11-17 11:24:19.524697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d9c0, cid 7, qid 0 00:29:54.908 [2024-11-17 11:24:19.524881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.908 [2024-11-17 11:24:19.524903] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.908 [2024-11-17 11:24:19.524918] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524926] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=8192, cccid=5 00:29:54.908 [2024-11-17 11:24:19.524934] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d6c0) on tqpair(0xa12650): expected_datao=0, payload_size=8192 00:29:54.908 [2024-11-17 11:24:19.524941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524963] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524977] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.524991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.908 [2024-11-17 11:24:19.525001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.908 [2024-11-17 11:24:19.525011] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525018] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=512, cccid=4 00:29:54.908 [2024-11-17 11:24:19.525025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d540) on tqpair(0xa12650): expected_datao=0, payload_size=512 00:29:54.908 [2024-11-17 11:24:19.525032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.908 
[2024-11-17 11:24:19.525042] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525049] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.908 [2024-11-17 11:24:19.525065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.908 [2024-11-17 11:24:19.525071] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525077] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=512, cccid=6 00:29:54.908 [2024-11-17 11:24:19.525085] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d840) on tqpair(0xa12650): expected_datao=0, payload_size=512 00:29:54.908 [2024-11-17 11:24:19.525092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525107] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:54.908 [2024-11-17 11:24:19.525124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:54.908 [2024-11-17 11:24:19.525130] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525136] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa12650): datao=0, datal=4096, cccid=7 00:29:54.908 [2024-11-17 11:24:19.525143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d9c0) on tqpair(0xa12650): expected_datao=0, payload_size=4096 00:29:54.908 [2024-11-17 11:24:19.525150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525166] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.908 [2024-11-17 11:24:19.525182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.908 [2024-11-17 11:24:19.525203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d6c0) on tqpair=0xa12650 00:29:54.908 [2024-11-17 11:24:19.525228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.908 [2024-11-17 11:24:19.525239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.908 [2024-11-17 11:24:19.525245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d540) on tqpair=0xa12650 00:29:54.908 [2024-11-17 11:24:19.525283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.908 [2024-11-17 11:24:19.525293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.908 [2024-11-17 11:24:19.525299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d840) on tqpair=0xa12650 00:29:54.908 [2024-11-17 11:24:19.525315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.908 [2024-11-17 11:24:19.525324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.908 [2024-11-17 11:24:19.525329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.908 [2024-11-17 11:24:19.525335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d9c0) on tqpair=0xa12650 00:29:54.908 
===================================================== 00:29:54.908 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.908 ===================================================== 00:29:54.908 Controller Capabilities/Features 00:29:54.908 ================================ 00:29:54.908 Vendor ID: 8086 00:29:54.908 Subsystem Vendor ID: 8086 00:29:54.908 Serial Number: SPDK00000000000001 00:29:54.908 Model Number: SPDK bdev Controller 00:29:54.908 Firmware Version: 25.01 00:29:54.908 Recommended Arb Burst: 6 00:29:54.908 IEEE OUI Identifier: e4 d2 5c 00:29:54.908 Multi-path I/O 00:29:54.908 May have multiple subsystem ports: Yes 00:29:54.908 May have multiple controllers: Yes 00:29:54.908 Associated with SR-IOV VF: No 00:29:54.908 Max Data Transfer Size: 131072 00:29:54.908 Max Number of Namespaces: 32 00:29:54.908 Max Number of I/O Queues: 127 00:29:54.908 NVMe Specification Version (VS): 1.3 00:29:54.908 NVMe Specification Version (Identify): 1.3 00:29:54.908 Maximum Queue Entries: 128 00:29:54.908 Contiguous Queues Required: Yes 00:29:54.908 Arbitration Mechanisms Supported 00:29:54.908 Weighted Round Robin: Not Supported 00:29:54.908 Vendor Specific: Not Supported 00:29:54.908 Reset Timeout: 15000 ms 00:29:54.908 Doorbell Stride: 4 bytes 00:29:54.908 NVM Subsystem Reset: Not Supported 00:29:54.908 Command Sets Supported 00:29:54.908 NVM Command Set: Supported 00:29:54.908 Boot Partition: Not Supported 00:29:54.908 Memory Page Size Minimum: 4096 bytes 00:29:54.908 Memory Page Size Maximum: 4096 bytes 00:29:54.908 Persistent Memory Region: Not Supported 00:29:54.908 Optional Asynchronous Events Supported 00:29:54.908 Namespace Attribute Notices: Supported 00:29:54.908 Firmware Activation Notices: Not Supported 00:29:54.908 ANA Change Notices: Not Supported 00:29:54.908 PLE Aggregate Log Change Notices: Not Supported 00:29:54.908 LBA Status Info Alert Notices: Not Supported 00:29:54.908 EGE Aggregate Log Change Notices: Not Supported 
00:29:54.908 Normal NVM Subsystem Shutdown event: Not Supported 00:29:54.908 Zone Descriptor Change Notices: Not Supported 00:29:54.908 Discovery Log Change Notices: Not Supported 00:29:54.908 Controller Attributes 00:29:54.908 128-bit Host Identifier: Supported 00:29:54.908 Non-Operational Permissive Mode: Not Supported 00:29:54.908 NVM Sets: Not Supported 00:29:54.908 Read Recovery Levels: Not Supported 00:29:54.908 Endurance Groups: Not Supported 00:29:54.908 Predictable Latency Mode: Not Supported 00:29:54.908 Traffic Based Keep ALive: Not Supported 00:29:54.908 Namespace Granularity: Not Supported 00:29:54.908 SQ Associations: Not Supported 00:29:54.908 UUID List: Not Supported 00:29:54.908 Multi-Domain Subsystem: Not Supported 00:29:54.908 Fixed Capacity Management: Not Supported 00:29:54.908 Variable Capacity Management: Not Supported 00:29:54.908 Delete Endurance Group: Not Supported 00:29:54.908 Delete NVM Set: Not Supported 00:29:54.908 Extended LBA Formats Supported: Not Supported 00:29:54.908 Flexible Data Placement Supported: Not Supported 00:29:54.908 00:29:54.908 Controller Memory Buffer Support 00:29:54.908 ================================ 00:29:54.908 Supported: No 00:29:54.908 00:29:54.908 Persistent Memory Region Support 00:29:54.908 ================================ 00:29:54.908 Supported: No 00:29:54.908 00:29:54.908 Admin Command Set Attributes 00:29:54.908 ============================ 00:29:54.908 Security Send/Receive: Not Supported 00:29:54.908 Format NVM: Not Supported 00:29:54.908 Firmware Activate/Download: Not Supported 00:29:54.908 Namespace Management: Not Supported 00:29:54.908 Device Self-Test: Not Supported 00:29:54.908 Directives: Not Supported 00:29:54.908 NVMe-MI: Not Supported 00:29:54.908 Virtualization Management: Not Supported 00:29:54.908 Doorbell Buffer Config: Not Supported 00:29:54.909 Get LBA Status Capability: Not Supported 00:29:54.909 Command & Feature Lockdown Capability: Not Supported 00:29:54.909 Abort Command 
Limit: 4 00:29:54.909 Async Event Request Limit: 4 00:29:54.909 Number of Firmware Slots: N/A 00:29:54.909 Firmware Slot 1 Read-Only: N/A 00:29:54.909 Firmware Activation Without Reset: N/A 00:29:54.909 Multiple Update Detection Support: N/A 00:29:54.909 Firmware Update Granularity: No Information Provided 00:29:54.909 Per-Namespace SMART Log: No 00:29:54.909 Asymmetric Namespace Access Log Page: Not Supported 00:29:54.909 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:54.909 Command Effects Log Page: Supported 00:29:54.909 Get Log Page Extended Data: Supported 00:29:54.909 Telemetry Log Pages: Not Supported 00:29:54.909 Persistent Event Log Pages: Not Supported 00:29:54.909 Supported Log Pages Log Page: May Support 00:29:54.909 Commands Supported & Effects Log Page: Not Supported 00:29:54.909 Feature Identifiers & Effects Log Page:May Support 00:29:54.909 NVMe-MI Commands & Effects Log Page: May Support 00:29:54.909 Data Area 4 for Telemetry Log: Not Supported 00:29:54.909 Error Log Page Entries Supported: 128 00:29:54.909 Keep Alive: Supported 00:29:54.909 Keep Alive Granularity: 10000 ms 00:29:54.909 00:29:54.909 NVM Command Set Attributes 00:29:54.909 ========================== 00:29:54.909 Submission Queue Entry Size 00:29:54.909 Max: 64 00:29:54.909 Min: 64 00:29:54.909 Completion Queue Entry Size 00:29:54.909 Max: 16 00:29:54.909 Min: 16 00:29:54.909 Number of Namespaces: 32 00:29:54.909 Compare Command: Supported 00:29:54.909 Write Uncorrectable Command: Not Supported 00:29:54.909 Dataset Management Command: Supported 00:29:54.909 Write Zeroes Command: Supported 00:29:54.909 Set Features Save Field: Not Supported 00:29:54.909 Reservations: Supported 00:29:54.909 Timestamp: Not Supported 00:29:54.909 Copy: Supported 00:29:54.909 Volatile Write Cache: Present 00:29:54.909 Atomic Write Unit (Normal): 1 00:29:54.909 Atomic Write Unit (PFail): 1 00:29:54.909 Atomic Compare & Write Unit: 1 00:29:54.909 Fused Compare & Write: Supported 00:29:54.909 Scatter-Gather 
List 00:29:54.909 SGL Command Set: Supported 00:29:54.909 SGL Keyed: Supported 00:29:54.909 SGL Bit Bucket Descriptor: Not Supported 00:29:54.909 SGL Metadata Pointer: Not Supported 00:29:54.909 Oversized SGL: Not Supported 00:29:54.909 SGL Metadata Address: Not Supported 00:29:54.909 SGL Offset: Supported 00:29:54.909 Transport SGL Data Block: Not Supported 00:29:54.909 Replay Protected Memory Block: Not Supported 00:29:54.909 00:29:54.909 Firmware Slot Information 00:29:54.909 ========================= 00:29:54.909 Active slot: 1 00:29:54.909 Slot 1 Firmware Revision: 25.01 00:29:54.909 00:29:54.909 00:29:54.909 Commands Supported and Effects 00:29:54.909 ============================== 00:29:54.909 Admin Commands 00:29:54.909 -------------- 00:29:54.909 Get Log Page (02h): Supported 00:29:54.909 Identify (06h): Supported 00:29:54.909 Abort (08h): Supported 00:29:54.909 Set Features (09h): Supported 00:29:54.909 Get Features (0Ah): Supported 00:29:54.909 Asynchronous Event Request (0Ch): Supported 00:29:54.909 Keep Alive (18h): Supported 00:29:54.909 I/O Commands 00:29:54.909 ------------ 00:29:54.909 Flush (00h): Supported LBA-Change 00:29:54.909 Write (01h): Supported LBA-Change 00:29:54.909 Read (02h): Supported 00:29:54.909 Compare (05h): Supported 00:29:54.909 Write Zeroes (08h): Supported LBA-Change 00:29:54.909 Dataset Management (09h): Supported LBA-Change 00:29:54.909 Copy (19h): Supported LBA-Change 00:29:54.909 00:29:54.909 Error Log 00:29:54.909 ========= 00:29:54.909 00:29:54.909 Arbitration 00:29:54.909 =========== 00:29:54.909 Arbitration Burst: 1 00:29:54.909 00:29:54.909 Power Management 00:29:54.909 ================ 00:29:54.909 Number of Power States: 1 00:29:54.909 Current Power State: Power State #0 00:29:54.909 Power State #0: 00:29:54.909 Max Power: 0.00 W 00:29:54.909 Non-Operational State: Operational 00:29:54.909 Entry Latency: Not Reported 00:29:54.909 Exit Latency: Not Reported 00:29:54.909 Relative Read Throughput: 0 00:29:54.909 
Relative Read Latency: 0 00:29:54.909 Relative Write Throughput: 0 00:29:54.909 Relative Write Latency: 0 00:29:54.909 Idle Power: Not Reported 00:29:54.909 Active Power: Not Reported 00:29:54.909 Non-Operational Permissive Mode: Not Supported 00:29:54.909 00:29:54.909 Health Information 00:29:54.909 ================== 00:29:54.909 Critical Warnings: 00:29:54.909 Available Spare Space: OK 00:29:54.909 Temperature: OK 00:29:54.909 Device Reliability: OK 00:29:54.909 Read Only: No 00:29:54.909 Volatile Memory Backup: OK 00:29:54.909 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:54.909 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:54.909 Available Spare: 0% 00:29:54.909 Available Spare Threshold: 0% 00:29:54.909 Life Percentage Used:[2024-11-17 11:24:19.525447] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.525461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xa12650) 00:29:54.909 [2024-11-17 11:24:19.525472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.909 [2024-11-17 11:24:19.525494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d9c0, cid 7, qid 0 00:29:54.909 [2024-11-17 11:24:19.525624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.909 [2024-11-17 11:24:19.525640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.909 [2024-11-17 11:24:19.525647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.525653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d9c0) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.525699] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:54.909 [2024-11-17 11:24:19.525720] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xa6cf40) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.525733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.909 [2024-11-17 11:24:19.525742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d0c0) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.525750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.909 [2024-11-17 11:24:19.525758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d240) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.525765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.909 [2024-11-17 11:24:19.525773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.525780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.909 [2024-11-17 11:24:19.525792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.525800] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.525806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa12650) 00:29:54.909 [2024-11-17 11:24:19.525817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.909 [2024-11-17 11:24:19.525855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0 00:29:54.909 [2024-11-17 11:24:19.526003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.909 [2024-11-17 11:24:19.526018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:29:54.909 [2024-11-17 11:24:19.526025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.526031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.526045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.526054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.526061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa12650) 00:29:54.909 [2024-11-17 11:24:19.526071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.909 [2024-11-17 11:24:19.526101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0 00:29:54.909 [2024-11-17 11:24:19.526197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:54.909 [2024-11-17 11:24:19.526211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:54.909 [2024-11-17 11:24:19.526218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.526225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650 00:29:54.909 [2024-11-17 11:24:19.526238] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:54.909 [2024-11-17 11:24:19.526249] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:54.909 [2024-11-17 11:24:19.526266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:54.909 [2024-11-17 11:24:19.526274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:54.910 [2024-11-17 11:24:19.526280] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xa12650)
00:29:54.910 [2024-11-17 11:24:19.526294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.910 [2024-11-17 11:24:19.526317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0
00:29:54.910 [2024-11-17 11:24:19.526395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:54.910 [2024-11-17 11:24:19.526410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:54.910 [2024-11-17 11:24:19.526416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:54.910 [2024-11-17 11:24:19.526423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650
00:29:54.910 [2024-11-17 11:24:19.526441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:54.910 [2024-11-17 11:24:19.526452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:54.910 [2024-11-17 11:24:19.526458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa12650)
00:29:54.910 [2024-11-17 11:24:19.526469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.910 [2024-11-17 11:24:19.526490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0
00:29:54.911
[2024-11-17 11:24:19.532575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:54.911 [2024-11-17 11:24:19.532582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650
00:29:54.911 [2024-11-17 11:24:19.532600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:54.911 [2024-11-17 11:24:19.532611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:54.911 [2024-11-17 11:24:19.532617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa12650)
00:29:54.911 [2024-11-17 11:24:19.532628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:54.911 [2024-11-17 11:24:19.532649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 3, qid 0
00:29:54.911 [2024-11-17 11:24:19.532765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:54.911 [2024-11-17 11:24:19.532780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:54.911 [2024-11-17 11:24:19.532787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:54.911 [2024-11-17 11:24:19.532793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa6d3c0) on tqpair=0xa12650
00:29:54.911 [2024-11-17 11:24:19.532807] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:29:54.911 0%
00:29:54.911 Data Units Read: 0
00:29:54.911 Data Units Written: 0
00:29:54.911 Host Read Commands: 0
00:29:54.911 Host Write Commands: 0
00:29:54.911 Controller Busy Time: 0 minutes
00:29:54.911 Power Cycles: 0
00:29:54.911 Power On Hours: 0 hours
00:29:54.911 Unsafe Shutdowns: 0
00:29:54.911 Unrecoverable Media Errors: 0
00:29:54.911 Lifetime Error Log Entries: 0
00:29:54.911 Warning Temperature Time: 0 minutes
00:29:54.911 Critical Temperature Time: 0 minutes
00:29:54.911
00:29:54.911 Number of Queues
00:29:54.911 ================
00:29:54.911 Number of I/O Submission Queues: 127
00:29:54.911 Number of I/O Completion Queues: 127
00:29:54.911
00:29:54.911 Active Namespaces
00:29:54.911 =================
00:29:54.911 Namespace ID:1
00:29:54.911 Error Recovery Timeout: Unlimited
00:29:54.911 Command Set Identifier: NVM (00h)
00:29:54.911 Deallocate: Supported
00:29:54.911 Deallocated/Unwritten Error: Not Supported
00:29:54.911 Deallocated Read Value: Unknown
00:29:54.911 Deallocate in Write Zeroes: Not Supported
00:29:54.911 Deallocated Guard Field: 0xFFFF
00:29:54.911 Flush: Supported
00:29:54.911 Reservation: Supported
00:29:54.911 Namespace Sharing Capabilities: Multiple Controllers
00:29:54.911 Size (in LBAs): 131072 (0GiB)
00:29:54.911 Capacity (in LBAs): 131072 (0GiB)
00:29:54.911 Utilization (in LBAs): 131072 (0GiB)
00:29:54.911 NGUID: ABCDEF0123456789ABCDEF0123456789
00:29:54.911 EUI64: ABCDEF0123456789
00:29:54.911 UUID: 3a449b2d-ad71-48b2-b4ea-a83300d56199
00:29:54.911 Thin Provisioning: Not Supported
00:29:54.911 Per-NS Atomic Units: Yes
00:29:54.911 Atomic Boundary Size (Normal): 0
00:29:54.911 Atomic Boundary Size (PFail): 0
00:29:54.911 Atomic Boundary Offset: 0
00:29:54.911 Maximum Single Source Range Length: 65535
00:29:54.911 Maximum Copy Length: 65535
00:29:54.911 Maximum Source Range Count: 1
00:29:54.911 NGUID/EUI64 Never Reused: No
00:29:54.911 Namespace Write Protected: No
00:29:54.911 Number of LBA Formats: 1
00:29:54.911 Current LBA Format: LBA Format #00
00:29:54.911 LBA Format #00: Data Size: 512 Metadata Size: 0
00:29:54.911
00:29:54.911 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:29:54.911 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:54.911 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:54.911 11:24:19
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.169 rmmod nvme_tcp 00:29:55.169 rmmod nvme_fabrics 00:29:55.169 rmmod nvme_keyring 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 339156 ']' 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 339156 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 339156 ']' 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 339156 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.169 
11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339156 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339156' 00:29:55.169 killing process with pid 339156 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 339156 00:29:55.169 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 339156 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.429 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.430 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.430 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.430 11:24:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.334 00:29:57.334 real 0m5.723s 00:29:57.334 user 0m4.709s 00:29:57.334 sys 0m2.098s 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.334 ************************************ 00:29:57.334 END TEST nvmf_identify 00:29:57.334 ************************************ 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.334 ************************************ 00:29:57.334 START TEST nvmf_perf 00:29:57.334 ************************************ 00:29:57.334 11:24:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:57.592 * Looking for test storage... 
00:29:57.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:57.592 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:57.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.593 --rc genhtml_branch_coverage=1 00:29:57.593 --rc genhtml_function_coverage=1 00:29:57.593 --rc genhtml_legend=1 00:29:57.593 --rc geninfo_all_blocks=1 00:29:57.593 --rc geninfo_unexecuted_blocks=1 00:29:57.593 00:29:57.593 ' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:57.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:57.593 --rc genhtml_branch_coverage=1 00:29:57.593 --rc genhtml_function_coverage=1 00:29:57.593 --rc genhtml_legend=1 00:29:57.593 --rc geninfo_all_blocks=1 00:29:57.593 --rc geninfo_unexecuted_blocks=1 00:29:57.593 00:29:57.593 ' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:57.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.593 --rc genhtml_branch_coverage=1 00:29:57.593 --rc genhtml_function_coverage=1 00:29:57.593 --rc genhtml_legend=1 00:29:57.593 --rc geninfo_all_blocks=1 00:29:57.593 --rc geninfo_unexecuted_blocks=1 00:29:57.593 00:29:57.593 ' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:57.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.593 --rc genhtml_branch_coverage=1 00:29:57.593 --rc genhtml_function_coverage=1 00:29:57.593 --rc genhtml_legend=1 00:29:57.593 --rc geninfo_all_blocks=1 00:29:57.593 --rc geninfo_unexecuted_blocks=1 00:29:57.593 00:29:57.593 ' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:57.593 11:24:22 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.593 11:24:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.125 11:24:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.125 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.126 
11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:00.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:00.126 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:00.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.126 11:24:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:00.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:00.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:30:00.126 00:30:00.126 --- 10.0.0.2 ping statistics --- 00:30:00.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.126 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:00.126 00:30:00.126 --- 10.0.0.1 ping statistics --- 00:30:00.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.126 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=341247 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 341247 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 341247 ']' 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.126 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:00.126 [2024-11-17 11:24:24.563692] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:30:00.127 [2024-11-17 11:24:24.563789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.127 [2024-11-17 11:24:24.632278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:00.127 [2024-11-17 11:24:24.675883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.127 [2024-11-17 11:24:24.675944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.127 [2024-11-17 11:24:24.675966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.127 [2024-11-17 11:24:24.675976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.127 [2024-11-17 11:24:24.675985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:00.127 [2024-11-17 11:24:24.677501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.127 [2024-11-17 11:24:24.677568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.127 [2024-11-17 11:24:24.677634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:00.127 [2024-11-17 11:24:24.677636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:00.384 11:24:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:03.663 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:03.663 11:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:03.663 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:03.663 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.921 11:24:28 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:03.921 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:03.921 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:03.921 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:03.921 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:04.179 [2024-11-17 11:24:28.825014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.437 11:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:04.699 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:04.699 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:04.956 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:04.956 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:05.213 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.474 [2024-11-17 11:24:29.913000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.474 11:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:05.732 11:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:05.732 11:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:05.732 11:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:05.732 11:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:07.105 Initializing NVMe Controllers 00:30:07.105 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:07.105 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:07.105 Initialization complete. Launching workers. 00:30:07.105 ======================================================== 00:30:07.105 Latency(us) 00:30:07.105 Device Information : IOPS MiB/s Average min max 00:30:07.105 PCIE (0000:88:00.0) NSID 1 from core 0: 85036.20 332.17 375.57 42.85 5275.13 00:30:07.105 ======================================================== 00:30:07.105 Total : 85036.20 332.17 375.57 42.85 5275.13 00:30:07.105 00:30:07.105 11:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.478 Initializing NVMe Controllers 00:30:08.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:08.478 Initialization complete. Launching workers. 
00:30:08.478 ======================================================== 00:30:08.478 Latency(us) 00:30:08.478 Device Information : IOPS MiB/s Average min max 00:30:08.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.00 0.26 15215.82 139.11 45030.05 00:30:08.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.00 0.17 23350.84 7937.92 47985.48 00:30:08.478 ======================================================== 00:30:08.478 Total : 111.00 0.43 18440.52 139.11 47985.48 00:30:08.478 00:30:08.478 11:24:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.851 Initializing NVMe Controllers 00:30:09.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:09.851 Initialization complete. Launching workers. 
00:30:09.851 ======================================================== 00:30:09.851 Latency(us) 00:30:09.851 Device Information : IOPS MiB/s Average min max 00:30:09.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8490.95 33.17 3787.19 598.91 7659.94 00:30:09.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3880.98 15.16 8281.95 5758.27 16049.05 00:30:09.851 ======================================================== 00:30:09.851 Total : 12371.93 48.33 5197.16 598.91 16049.05 00:30:09.851 00:30:09.851 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:09.851 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:09.851 11:24:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:12.381 Initializing NVMe Controllers 00:30:12.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.381 Controller IO queue size 128, less than required. 00:30:12.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.381 Controller IO queue size 128, less than required. 00:30:12.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:12.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:12.381 Initialization complete. Launching workers. 
00:30:12.381 ======================================================== 00:30:12.381 Latency(us) 00:30:12.381 Device Information : IOPS MiB/s Average min max 00:30:12.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1746.45 436.61 74302.33 47644.50 109578.11 00:30:12.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 544.98 136.25 244285.39 117234.71 335908.49 00:30:12.381 ======================================================== 00:30:12.381 Total : 2291.44 572.86 114730.33 47644.50 335908.49 00:30:12.381 00:30:12.381 11:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:12.639 No valid NVMe controllers or AIO or URING devices found 00:30:12.639 Initializing NVMe Controllers 00:30:12.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:12.639 Controller IO queue size 128, less than required. 00:30:12.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:12.639 Controller IO queue size 128, less than required. 00:30:12.639 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:12.639 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:12.639 WARNING: Some requested NVMe devices were skipped 00:30:12.896 11:24:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:15.421 Initializing NVMe Controllers 00:30:15.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:15.421 Controller IO queue size 128, less than required. 00:30:15.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.421 Controller IO queue size 128, less than required. 00:30:15.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:15.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:15.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:15.421 Initialization complete. Launching workers. 
00:30:15.421 00:30:15.421 ==================== 00:30:15.421 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:15.421 TCP transport: 00:30:15.421 polls: 11498 00:30:15.421 idle_polls: 8540 00:30:15.421 sock_completions: 2958 00:30:15.421 nvme_completions: 5577 00:30:15.421 submitted_requests: 8354 00:30:15.421 queued_requests: 1 00:30:15.421 00:30:15.421 ==================== 00:30:15.421 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:15.421 TCP transport: 00:30:15.421 polls: 11777 00:30:15.421 idle_polls: 8261 00:30:15.421 sock_completions: 3516 00:30:15.421 nvme_completions: 6639 00:30:15.421 submitted_requests: 10014 00:30:15.421 queued_requests: 1 00:30:15.421 ======================================================== 00:30:15.421 Latency(us) 00:30:15.421 Device Information : IOPS MiB/s Average min max 00:30:15.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1392.35 348.09 93552.47 46565.29 165667.36 00:30:15.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1657.54 414.38 78053.83 45557.02 109985.03 00:30:15.421 ======================================================== 00:30:15.421 Total : 3049.89 762.47 85129.35 45557.02 165667.36 00:30:15.421 00:30:15.421 11:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:15.421 11:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.678 11:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:15.678 11:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:15.678 11:24:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=77c1e4ab-bb5f-4926-be3c-0bc8fd870b01 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 77c1e4ab-bb5f-4926-be3c-0bc8fd870b01 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=77c1e4ab-bb5f-4926-be3c-0bc8fd870b01 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:18.960 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:19.525 { 00:30:19.525 "uuid": "77c1e4ab-bb5f-4926-be3c-0bc8fd870b01", 00:30:19.525 "name": "lvs_0", 00:30:19.525 "base_bdev": "Nvme0n1", 00:30:19.525 "total_data_clusters": 238234, 00:30:19.525 "free_clusters": 238234, 00:30:19.525 "block_size": 512, 00:30:19.525 "cluster_size": 4194304 00:30:19.525 } 00:30:19.525 ]' 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="77c1e4ab-bb5f-4926-be3c-0bc8fd870b01") .free_clusters' 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="77c1e4ab-bb5f-4926-be3c-0bc8fd870b01") .cluster_size' 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:19.525 952936 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:19.525 11:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 77c1e4ab-bb5f-4926-be3c-0bc8fd870b01 lbd_0 20480 00:30:19.783 11:24:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=59623c82-6d0b-4a4c-9438-6faa81e9c119 00:30:19.783 11:24:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 59623c82-6d0b-4a4c-9438-6faa81e9c119 lvs_n_0 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=7d4e6c20-7ebb-433c-b0ed-2a73913ed173 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 7d4e6c20-7ebb-433c-b0ed-2a73913ed173 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=7d4e6c20-7ebb-433c-b0ed-2a73913ed173 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:21.155 { 00:30:21.155 "uuid": "77c1e4ab-bb5f-4926-be3c-0bc8fd870b01", 00:30:21.155 "name": "lvs_0", 00:30:21.155 "base_bdev": "Nvme0n1", 00:30:21.155 "total_data_clusters": 238234, 00:30:21.155 "free_clusters": 233114, 00:30:21.155 "block_size": 512, 00:30:21.155 
"cluster_size": 4194304 00:30:21.155 }, 00:30:21.155 { 00:30:21.155 "uuid": "7d4e6c20-7ebb-433c-b0ed-2a73913ed173", 00:30:21.155 "name": "lvs_n_0", 00:30:21.155 "base_bdev": "59623c82-6d0b-4a4c-9438-6faa81e9c119", 00:30:21.155 "total_data_clusters": 5114, 00:30:21.155 "free_clusters": 5114, 00:30:21.155 "block_size": 512, 00:30:21.155 "cluster_size": 4194304 00:30:21.155 } 00:30:21.155 ]' 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="7d4e6c20-7ebb-433c-b0ed-2a73913ed173") .free_clusters' 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="7d4e6c20-7ebb-433c-b0ed-2a73913ed173") .cluster_size' 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:21.155 20456 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:21.155 11:24:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d4e6c20-7ebb-433c-b0ed-2a73913ed173 lbd_nest_0 20456 00:30:21.413 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=618ee239-53c9-4919-8a56-4c2994033e4f 00:30:21.413 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.671 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:21.671 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 618ee239-53c9-4919-8a56-4c2994033e4f 00:30:21.929 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.494 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:22.494 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:22.494 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:22.494 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:22.494 11:24:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.689 Initializing NVMe Controllers 00:30:34.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.689 Initialization complete. Launching workers. 
00:30:34.689 ======================================================== 00:30:34.689 Latency(us) 00:30:34.689 Device Information : IOPS MiB/s Average min max 00:30:34.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.60 0.02 21521.90 171.09 44789.65 00:30:34.689 ======================================================== 00:30:34.689 Total : 46.60 0.02 21521.90 171.09 44789.65 00:30:34.689 00:30:34.689 11:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:34.689 11:24:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:42.791 Initializing NVMe Controllers 00:30:42.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:42.791 Initialization complete. Launching workers. 
00:30:42.791 ======================================================== 00:30:42.791 Latency(us) 00:30:42.791 Device Information : IOPS MiB/s Average min max 00:30:42.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.97 9.62 12990.99 5008.79 50882.50 00:30:42.791 ======================================================== 00:30:42.791 Total : 76.97 9.62 12990.99 5008.79 50882.50 00:30:42.791 00:30:42.791 11:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:42.791 11:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:42.791 11:25:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.980 Initializing NVMe Controllers 00:30:54.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.981 Initialization complete. Launching workers. 
00:30:54.981 ======================================================== 00:30:54.981 Latency(us) 00:30:54.981 Device Information : IOPS MiB/s Average min max 00:30:54.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7580.22 3.70 4231.26 278.98 46844.92 00:30:54.981 ======================================================== 00:30:54.981 Total : 7580.22 3.70 4231.26 278.98 46844.92 00:30:54.981 00:30:54.981 11:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:54.981 11:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.949 Initializing NVMe Controllers 00:31:04.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.949 Initialization complete. Launching workers. 
00:31:04.949 ======================================================== 00:31:04.949 Latency(us) 00:31:04.949 Device Information : IOPS MiB/s Average min max 00:31:04.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3889.89 486.24 8226.80 770.06 16989.64 00:31:04.949 ======================================================== 00:31:04.949 Total : 3889.89 486.24 8226.80 770.06 16989.64 00:31:04.949 00:31:04.949 11:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:04.949 11:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:04.949 11:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.956 Initializing NVMe Controllers 00:31:14.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.956 Controller IO queue size 128, less than required. 00:31:14.956 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:14.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:14.956 Initialization complete. Launching workers. 
00:31:14.956 ======================================================== 00:31:14.956 Latency(us) 00:31:14.956 Device Information : IOPS MiB/s Average min max 00:31:14.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11713.91 5.72 10932.00 1722.14 25584.62 00:31:14.956 ======================================================== 00:31:14.956 Total : 11713.91 5.72 10932.00 1722.14 25584.62 00:31:14.956 00:31:14.956 11:25:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:14.956 11:25:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:25.070 Initializing NVMe Controllers 00:31:25.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:25.070 Controller IO queue size 128, less than required. 00:31:25.070 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:25.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:25.070 Initialization complete. Launching workers. 
00:31:25.070 ======================================================== 00:31:25.070 Latency(us) 00:31:25.070 Device Information : IOPS MiB/s Average min max 00:31:25.070 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1163.10 145.39 110202.16 9786.74 263566.86 00:31:25.070 ======================================================== 00:31:25.070 Total : 1163.10 145.39 110202.16 9786.74 263566.86 00:31:25.070 00:31:25.070 11:25:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.070 11:25:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 618ee239-53c9-4919-8a56-4c2994033e4f 00:31:25.328 11:25:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:25.585 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 59623c82-6d0b-4a4c-9438-6faa81e9c119 00:31:25.843 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.410 rmmod nvme_tcp 00:31:26.410 rmmod nvme_fabrics 00:31:26.410 rmmod nvme_keyring 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 341247 ']' 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 341247 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 341247 ']' 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 341247 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341247 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341247' 00:31:26.410 killing process with pid 341247 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 341247 00:31:26.410 11:25:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 341247 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:28.310 11:25:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:30.218 00:31:30.218 real 1m32.593s 00:31:30.218 user 5m42.561s 00:31:30.218 sys 0m15.871s 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:30.218 ************************************ 00:31:30.218 END TEST nvmf_perf 00:31:30.218 ************************************ 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:30.218 ************************************ 00:31:30.218 START TEST nvmf_fio_host 00:31:30.218 ************************************ 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:30.218 * Looking for test storage... 00:31:30.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:31:30.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.218 --rc genhtml_branch_coverage=1 00:31:30.218 --rc genhtml_function_coverage=1 00:31:30.218 --rc genhtml_legend=1 00:31:30.218 --rc geninfo_all_blocks=1 00:31:30.218 --rc geninfo_unexecuted_blocks=1 00:31:30.218 00:31:30.218 ' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.218 --rc genhtml_branch_coverage=1 00:31:30.218 --rc genhtml_function_coverage=1 00:31:30.218 --rc genhtml_legend=1 00:31:30.218 --rc geninfo_all_blocks=1 00:31:30.218 --rc geninfo_unexecuted_blocks=1 00:31:30.218 00:31:30.218 ' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.218 --rc genhtml_branch_coverage=1 00:31:30.218 --rc genhtml_function_coverage=1 00:31:30.218 --rc genhtml_legend=1 00:31:30.218 --rc geninfo_all_blocks=1 00:31:30.218 --rc geninfo_unexecuted_blocks=1 00:31:30.218 00:31:30.218 ' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.218 --rc genhtml_branch_coverage=1 00:31:30.218 --rc genhtml_function_coverage=1 00:31:30.218 --rc genhtml_legend=1 00:31:30.218 --rc geninfo_all_blocks=1 00:31:30.218 --rc geninfo_unexecuted_blocks=1 00:31:30.218 00:31:30.218 ' 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.218 11:25:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.218 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:30.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.219 11:25:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:30.219 11:25:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:32.751 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:32.752 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:32.752 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.752 11:25:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:32.752 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:32.752 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.752 11:25:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:32.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:32.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:31:32.752 00:31:32.752 --- 10.0.0.2 ping statistics --- 00:31:32.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.752 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:32.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:31:32.752 00:31:32.752 --- 10.0.0.1 ping statistics --- 00:31:32.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.752 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=353361 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:32.752 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 353361 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 353361 ']' 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:32.753 11:25:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:32.753 [2024-11-17 11:25:57.014381] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:31:32.753 [2024-11-17 11:25:57.014463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.753 [2024-11-17 11:25:57.085614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:32.753 [2024-11-17 11:25:57.131572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:32.753 [2024-11-17 11:25:57.131627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:32.753 [2024-11-17 11:25:57.131650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:32.753 [2024-11-17 11:25:57.131661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:32.753 [2024-11-17 11:25:57.131670] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.753 [2024-11-17 11:25:57.133179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.753 [2024-11-17 11:25:57.133246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:32.753 [2024-11-17 11:25:57.133313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:32.753 [2024-11-17 11:25:57.133316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.753 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.753 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:32.753 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:33.010 [2024-11-17 11:25:57.513579] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.010 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:33.010 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.010 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.010 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:33.268 Malloc1 00:31:33.268 11:25:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:33.526 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:33.783 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:34.041 [2024-11-17 11:25:58.649631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.041 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:34.299 11:25:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:34.299 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:34.556 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:34.557 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:34.557 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:34.557 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.557 11:25:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.557 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:34.557 fio-3.35 00:31:34.557 Starting 1 thread 00:31:37.083 00:31:37.084 test: (groupid=0, jobs=1): err= 0: pid=353715: Sun Nov 17 11:26:01 2024 00:31:37.084 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2006msec) 00:31:37.084 slat (nsec): min=1998, max=112796, avg=2601.21, stdev=1645.55 00:31:37.084 clat (usec): min=2087, max=14018, avg=7872.56, stdev=646.55 00:31:37.084 lat (usec): min=2107, max=14020, avg=7875.16, stdev=646.48 00:31:37.084 clat percentiles (usec): 00:31:37.084 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:31:37.084 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:31:37.084 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8848], 00:31:37.084 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11338], 99.95th=[12256], 00:31:37.084 | 99.99th=[13435] 00:31:37.084 bw ( KiB/s): min=34602, max=36080, per=99.83%, avg=35460.50, stdev=618.98, samples=4 00:31:37.084 iops : min= 8650, max= 9020, avg=8865.00, stdev=154.98, samples=4 00:31:37.084 write: IOPS=8891, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec); 0 zone resets 00:31:37.084 slat (nsec): min=2151, max=88993, avg=2707.70, stdev=1280.50 00:31:37.084 clat (usec): min=1455, max=12277, avg=6494.89, stdev=537.42 00:31:37.084 lat (usec): min=1460, max=12279, avg=6497.60, stdev=537.42 00:31:37.084 clat percentiles (usec): 00:31:37.084 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:31:37.084 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 
00:31:37.084 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:31:37.084 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10683], 99.95th=[11207], 00:31:37.084 | 99.99th=[12125] 00:31:37.084 bw ( KiB/s): min=35328, max=35840, per=99.96%, avg=35550.25, stdev=219.65, samples=4 00:31:37.084 iops : min= 8832, max= 8960, avg=8887.50, stdev=54.95, samples=4 00:31:37.084 lat (msec) : 2=0.02%, 4=0.12%, 10=99.70%, 20=0.16% 00:31:37.084 cpu : usr=65.34%, sys=32.97%, ctx=77, majf=0, minf=41 00:31:37.084 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:37.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.084 issued rwts: total=17814,17836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.084 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.084 00:31:37.084 Run status group 0 (all jobs): 00:31:37.084 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2006-2006msec 00:31:37.084 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:37.084 11:26:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:37.341 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:37.341 fio-3.35 00:31:37.341 Starting 1 thread 00:31:39.870 00:31:39.870 test: (groupid=0, jobs=1): err= 0: pid=354165: Sun Nov 17 11:26:04 2024 00:31:39.870 read: IOPS=8456, BW=132MiB/s (139MB/s)(265MiB/2008msec) 00:31:39.870 slat (nsec): min=2815, max=93594, avg=3627.52, stdev=1591.66 00:31:39.870 clat (usec): min=2407, max=17126, avg=8626.13, stdev=1887.60 00:31:39.870 lat (usec): min=2410, max=17129, avg=8629.76, stdev=1887.61 00:31:39.870 clat percentiles (usec): 00:31:39.870 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 6980], 00:31:39.870 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:31:39.870 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[11731], 00:31:39.870 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14222], 99.95th=[14353], 00:31:39.870 | 99.99th=[14615] 00:31:39.870 bw ( KiB/s): min=61056, max=78944, per=51.16%, avg=69224.00, stdev=9368.97, samples=4 00:31:39.870 iops : min= 3816, max= 4934, avg=4326.50, stdev=585.56, samples=4 00:31:39.870 write: IOPS=4975, BW=77.7MiB/s (81.5MB/s)(141MiB/1816msec); 0 zone resets 00:31:39.870 slat (usec): min=30, max=158, avg=33.30, stdev= 4.97 00:31:39.870 clat (usec): min=5419, max=20203, avg=11341.60, stdev=1833.38 00:31:39.870 lat (usec): min=5451, max=20235, avg=11374.89, stdev=1833.13 00:31:39.870 clat percentiles (usec): 00:31:39.870 | 1.00th=[ 7701], 5.00th=[ 8586], 10.00th=[ 9110], 
20.00th=[ 9765], 00:31:39.870 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:31:39.870 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13829], 95.00th=[14615], 00:31:39.870 | 99.00th=[15664], 99.50th=[17433], 99.90th=[19530], 99.95th=[19792], 00:31:39.870 | 99.99th=[20317] 00:31:39.870 bw ( KiB/s): min=64000, max=81504, per=90.45%, avg=72008.00, stdev=9211.09, samples=4 00:31:39.870 iops : min= 4000, max= 5094, avg=4500.50, stdev=575.69, samples=4 00:31:39.870 lat (msec) : 4=0.20%, 10=59.21%, 20=40.58%, 50=0.01% 00:31:39.870 cpu : usr=78.43%, sys=20.43%, ctx=37, majf=0, minf=65 00:31:39.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:39.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:39.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:39.870 issued rwts: total=16981,9036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:39.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:39.870 00:31:39.870 Run status group 0 (all jobs): 00:31:39.870 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=265MiB (278MB), run=2008-2008msec 00:31:39.870 WRITE: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=141MiB (148MB), run=1816-1816msec 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:39.870 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:31:39.871 11:26:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:43.148 Nvme0n1 00:31:43.148 11:26:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=ce906a13-8cbc-48f5-a758-6aa8365d4f6c 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb ce906a13-8cbc-48f5-a758-6aa8365d4f6c 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ce906a13-8cbc-48f5-a758-6aa8365d4f6c 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:46.427 { 00:31:46.427 "uuid": "ce906a13-8cbc-48f5-a758-6aa8365d4f6c", 00:31:46.427 "name": "lvs_0", 00:31:46.427 "base_bdev": "Nvme0n1", 00:31:46.427 "total_data_clusters": 930, 00:31:46.427 "free_clusters": 930, 00:31:46.427 "block_size": 512, 00:31:46.427 "cluster_size": 1073741824 00:31:46.427 } 00:31:46.427 ]' 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ce906a13-8cbc-48f5-a758-6aa8365d4f6c") .free_clusters' 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ce906a13-8cbc-48f5-a758-6aa8365d4f6c") .cluster_size' 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:46.427 952320 00:31:46.427 11:26:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:46.683 47a9ec2a-ac78-4e82-abe1-e84722441a9e 00:31:46.683 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:46.940 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:47.197 11:26:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:47.762 11:26:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:47.762 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:47.762 fio-3.35 00:31:47.762 Starting 1 thread 00:31:50.288 00:31:50.288 test: (groupid=0, jobs=1): err= 0: pid=355451: Sun Nov 17 11:26:14 2024 00:31:50.288 read: IOPS=5454, BW=21.3MiB/s (22.3MB/s)(42.8MiB/2007msec) 00:31:50.288 slat (usec): min=2, max=151, avg= 2.55, stdev= 2.08 00:31:50.288 clat (usec): min=936, max=172333, avg=12767.61, stdev=12180.29 00:31:50.288 lat (usec): min=938, max=172369, avg=12770.16, stdev=12180.56 00:31:50.288 clat percentiles (msec): 
00:31:50.288 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:31:50.288 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:31:50.288 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:31:50.288 | 99.00th=[ 15], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:31:50.288 | 99.99th=[ 174] 00:31:50.288 bw ( KiB/s): min=15352, max=24096, per=99.64%, avg=21740.00, stdev=4262.46, samples=4 00:31:50.288 iops : min= 3838, max= 6024, avg=5435.00, stdev=1065.62, samples=4 00:31:50.288 write: IOPS=5431, BW=21.2MiB/s (22.2MB/s)(42.6MiB/2007msec); 0 zone resets 00:31:50.288 slat (nsec): min=2120, max=97535, avg=2636.91, stdev=1437.00 00:31:50.288 clat (usec): min=428, max=169928, avg=10570.03, stdev=11409.27 00:31:50.288 lat (usec): min=431, max=169952, avg=10572.66, stdev=11409.49 00:31:50.288 clat percentiles (msec): 00:31:50.288 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:31:50.288 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 11], 00:31:50.288 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:31:50.288 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 171], 00:31:50.288 | 99.99th=[ 171] 00:31:50.288 bw ( KiB/s): min=16232, max=23680, per=99.90%, avg=21706.00, stdev=3655.44, samples=4 00:31:50.288 iops : min= 4058, max= 5920, avg=5426.50, stdev=913.86, samples=4 00:31:50.288 lat (usec) : 500=0.01%, 1000=0.02% 00:31:50.288 lat (msec) : 2=0.02%, 4=0.09%, 10=32.16%, 20=67.12%, 250=0.59% 00:31:50.288 cpu : usr=62.96%, sys=35.84%, ctx=106, majf=0, minf=41 00:31:50.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:50.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.288 issued rwts: total=10948,10902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.288 00:31:50.288 Run 
status group 0 (all jobs): 00:31:50.288 READ: bw=21.3MiB/s (22.3MB/s), 21.3MiB/s-21.3MiB/s (22.3MB/s-22.3MB/s), io=42.8MiB (44.8MB), run=2007-2007msec 00:31:50.288 WRITE: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.6MiB (44.7MB), run=2007-2007msec 00:31:50.288 11:26:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:50.546 11:26:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:51.918 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=62301fb1-ca54-4b94-ab5b-a6f6970b3d7d 00:31:51.918 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 62301fb1-ca54-4b94-ab5b-a6f6970b3d7d 00:31:51.918 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=62301fb1-ca54-4b94-ab5b-a6f6970b3d7d 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:51.919 { 00:31:51.919 "uuid": "ce906a13-8cbc-48f5-a758-6aa8365d4f6c", 00:31:51.919 "name": "lvs_0", 00:31:51.919 "base_bdev": "Nvme0n1", 00:31:51.919 "total_data_clusters": 930, 00:31:51.919 "free_clusters": 0, 00:31:51.919 "block_size": 512, 00:31:51.919 "cluster_size": 1073741824 00:31:51.919 }, 
00:31:51.919 { 00:31:51.919 "uuid": "62301fb1-ca54-4b94-ab5b-a6f6970b3d7d", 00:31:51.919 "name": "lvs_n_0", 00:31:51.919 "base_bdev": "47a9ec2a-ac78-4e82-abe1-e84722441a9e", 00:31:51.919 "total_data_clusters": 237847, 00:31:51.919 "free_clusters": 237847, 00:31:51.919 "block_size": 512, 00:31:51.919 "cluster_size": 4194304 00:31:51.919 } 00:31:51.919 ]' 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="62301fb1-ca54-4b94-ab5b-a6f6970b3d7d") .free_clusters' 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="62301fb1-ca54-4b94-ab5b-a6f6970b3d7d") .cluster_size' 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:51.919 951388 00:31:51.919 11:26:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:52.851 baab8878-ff52-4007-a36a-b0bed41eaa18 00:31:52.851 11:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:52.851 11:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:53.109 11:26:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.366 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.624 11:26:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.624 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:53.624 fio-3.35 00:31:53.624 Starting 1 thread 00:31:56.148 00:31:56.148 test: (groupid=0, jobs=1): err= 0: pid=356188: Sun Nov 17 11:26:20 2024 00:31:56.148 read: IOPS=5788, BW=22.6MiB/s (23.7MB/s)(45.4MiB/2008msec) 00:31:56.148 slat (usec): min=2, max=128, avg= 2.61, stdev= 1.94 00:31:56.148 clat (usec): min=4393, max=20997, avg=12130.49, stdev=1126.66 00:31:56.148 lat (usec): min=4397, max=21000, avg=12133.10, stdev=1126.57 00:31:56.148 clat percentiles (usec): 00:31:56.148 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:31:56.148 | 
30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:31:56.148 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:31:56.148 | 99.00th=[14615], 99.50th=[14877], 99.90th=[19006], 99.95th=[20055], 00:31:56.148 | 99.99th=[20841] 00:31:56.148 bw ( KiB/s): min=21848, max=23800, per=99.70%, avg=23086.00, stdev=855.70, samples=4 00:31:56.148 iops : min= 5462, max= 5950, avg=5771.50, stdev=213.92, samples=4 00:31:56.148 write: IOPS=5769, BW=22.5MiB/s (23.6MB/s)(45.3MiB/2008msec); 0 zone resets 00:31:56.148 slat (nsec): min=2190, max=93165, avg=2738.02, stdev=1487.78 00:31:56.148 clat (usec): min=2060, max=17393, avg=9833.47, stdev=899.91 00:31:56.148 lat (usec): min=2065, max=17396, avg=9836.21, stdev=899.89 00:31:56.148 clat percentiles (usec): 00:31:56.148 | 1.00th=[ 7767], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:31:56.148 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:31:56.148 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:31:56.148 | 99.00th=[11731], 99.50th=[12125], 99.90th=[14353], 99.95th=[15795], 00:31:56.148 | 99.99th=[17433] 00:31:56.148 bw ( KiB/s): min=22936, max=23232, per=99.99%, avg=23078.00, stdev=121.22, samples=4 00:31:56.148 iops : min= 5734, max= 5808, avg=5769.50, stdev=30.30, samples=4 00:31:56.148 lat (msec) : 4=0.05%, 10=30.15%, 20=69.77%, 50=0.04% 00:31:56.148 cpu : usr=63.73%, sys=34.93%, ctx=100, majf=0, minf=41 00:31:56.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:56.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:56.148 issued rwts: total=11624,11586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:56.148 00:31:56.148 Run status group 0 (all jobs): 00:31:56.148 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s 
(23.7MB/s-23.7MB/s), io=45.4MiB (47.6MB), run=2008-2008msec 00:31:56.148 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2008-2008msec 00:31:56.148 11:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:56.406 11:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:56.406 11:26:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:00.587 11:26:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:00.587 11:26:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:03.866 11:26:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:03.866 11:26:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:05.762 rmmod nvme_tcp 00:32:05.762 rmmod nvme_fabrics 00:32:05.762 rmmod nvme_keyring 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 353361 ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 353361 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 353361 ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 353361 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353361 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353361' 00:32:05.762 killing process with pid 353361 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 353361 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 353361 
00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.762 11:26:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:08.295 00:32:08.295 real 0m37.826s 00:32:08.295 user 2m25.557s 00:32:08.295 sys 0m6.901s 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.295 ************************************ 00:32:08.295 END TEST nvmf_fio_host 00:32:08.295 ************************************ 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.295 ************************************ 00:32:08.295 START TEST nvmf_failover 00:32:08.295 ************************************ 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:08.295 * Looking for test storage... 00:32:08.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.295 --rc genhtml_branch_coverage=1 00:32:08.295 --rc genhtml_function_coverage=1 00:32:08.295 --rc genhtml_legend=1 00:32:08.295 --rc geninfo_all_blocks=1 00:32:08.295 --rc geninfo_unexecuted_blocks=1 00:32:08.295 00:32:08.295 ' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.295 --rc genhtml_branch_coverage=1 00:32:08.295 --rc genhtml_function_coverage=1 00:32:08.295 --rc genhtml_legend=1 00:32:08.295 --rc geninfo_all_blocks=1 00:32:08.295 --rc geninfo_unexecuted_blocks=1 00:32:08.295 00:32:08.295 ' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.295 --rc genhtml_branch_coverage=1 00:32:08.295 --rc genhtml_function_coverage=1 00:32:08.295 --rc genhtml_legend=1 00:32:08.295 --rc geninfo_all_blocks=1 00:32:08.295 --rc geninfo_unexecuted_blocks=1 00:32:08.295 00:32:08.295 ' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:08.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.295 --rc genhtml_branch_coverage=1 00:32:08.295 --rc genhtml_function_coverage=1 00:32:08.295 --rc genhtml_legend=1 00:32:08.295 --rc geninfo_all_blocks=1 00:32:08.295 --rc geninfo_unexecuted_blocks=1 00:32:08.295 00:32:08.295 ' 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.295 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:08.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:08.296 11:26:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.200 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:10.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:10.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:10.201 11:26:34 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:10.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:10.201 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:10.201 
11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:10.201 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:10.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:10.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:32:10.461 00:32:10.461 --- 10.0.0.2 ping statistics --- 00:32:10.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.461 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:10.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:10.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:32:10.461 00:32:10.461 --- 10.0.0.1 ping statistics --- 00:32:10.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.461 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=359556 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 359556 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 359556 ']' 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.461 11:26:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.461 [2024-11-17 11:26:35.008501] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:10.461 [2024-11-17 11:26:35.008587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.461 [2024-11-17 11:26:35.076712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:10.719 [2024-11-17 11:26:35.120787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.719 [2024-11-17 11:26:35.120859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.719 [2024-11-17 11:26:35.120873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.719 [2024-11-17 11:26:35.120884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:10.719 [2024-11-17 11:26:35.120893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.719 [2024-11-17 11:26:35.122323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.719 [2024-11-17 11:26:35.122380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.719 [2024-11-17 11:26:35.122383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.719 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.977 [2024-11-17 11:26:35.544983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.977 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:11.236 Malloc0 00:32:11.496 11:26:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:11.754 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:12.012 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:12.270 [2024-11-17 11:26:36.780298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:12.270 11:26:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:12.527 [2024-11-17 11:26:37.105301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:12.527 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:12.785 [2024-11-17 11:26:37.430263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=359846 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 359846 /var/tmp/bdevperf.sock 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 359846 ']' 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:13.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:13.043 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:13.301 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:13.301 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:13.301 11:26:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:13.865 NVMe0n1 00:32:13.866 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:14.123 00:32:14.123 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=359981 00:32:14.123 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:14.123 11:26:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:15.056 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:15.314 [2024-11-17 11:26:39.859225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831060 is same with the state(6) to be set 00:32:15.314 11:26:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:18.594 11:26:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.852 00:32:18.852 11:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:19.110 [2024-11-17 11:26:43.627403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8325a0 is same with the state(6) to be set
00:32:19.110 11:26:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:22.390 11:26:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.390 [2024-11-17 11:26:46.904486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.391 11:26:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:23.323 11:26:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:23.581 [2024-11-17 11:26:48.200595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set 00:32:23.582 [2024-11-17 11:26:48.201317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set 00:32:23.582 [2024-11-17 11:26:48.201329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set 00:32:23.582 [2024-11-17 11:26:48.201341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set 00:32:23.582 [2024-11-17 11:26:48.201352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x833790 is same with the state(6) to be set 00:32:23.582 11:26:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 359981 00:32:30.155 { 00:32:30.155 "results": [ 00:32:30.155 { 00:32:30.155 "job": "NVMe0n1", 00:32:30.155 "core_mask": "0x1", 00:32:30.155 "workload": "verify", 00:32:30.155 "status": "finished", 00:32:30.155 "verify_range": { 00:32:30.155 "start": 0, 00:32:30.155 "length": 16384 00:32:30.155 }, 00:32:30.155 "queue_depth": 128, 00:32:30.155 "io_size": 4096, 00:32:30.155 "runtime": 15.048642, 00:32:30.155 "iops": 8538.843571399997, 00:32:30.155 "mibps": 33.35485770078124, 00:32:30.155 "io_failed": 11645, 00:32:30.155 "io_timeout": 0, 00:32:30.155 "avg_latency_us": 13681.791338698753, 00:32:30.155 "min_latency_us": 594.6785185185186, 00:32:30.155 "max_latency_us": 46215.01629629629 00:32:30.155 } 00:32:30.155 ], 00:32:30.155 "core_count": 1 00:32:30.155 } 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 359846 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 359846 ']' 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 359846 00:32:30.155 11:26:53 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 359846 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 359846' 00:32:30.155 killing process with pid 359846 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 359846 00:32:30.155 11:26:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 359846 00:32:30.155 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:30.155 [2024-11-17 11:26:37.497582] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:30.155 [2024-11-17 11:26:37.497684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359846 ] 00:32:30.155 [2024-11-17 11:26:37.568213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.155 [2024-11-17 11:26:37.614785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.155 Running I/O for 15 seconds... 
00:32:30.155 8620.00 IOPS, 33.67 MiB/s [2024-11-17T10:26:54.813Z]
[2024-11-17 11:26:39.859643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.155 [2024-11-17 11:26:39.859971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.155 [2024-11-17 11:26:39.859986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.156 [2024-11-17 11:26:39.860969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.860984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.860997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.156 [2024-11-17 11:26:39.861170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.156 [2024-11-17 11:26:39.861184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.157 [2024-11-17 11:26:39.861456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.861979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.861993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157 [2024-11-17 11:26:39.862319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.157 [2024-11-17 11:26:39.862333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.157
[2024-11-17 11:26:39.862348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.157 [2024-11-17 11:26:39.862362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.157 [2024-11-17 11:26:39.862376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.157 [2024-11-17 11:26:39.862393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.157 [2024-11-17 11:26:39.862408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.157 [2024-11-17 11:26:39.862422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 
[2024-11-17 11:26:39.862910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.862982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.862997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 
[2024-11-17 11:26:39.863412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.158 [2024-11-17 11:26:39.863455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.158 [2024-11-17 11:26:39.863483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.158 [2024-11-17 11:26:39.863529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.158 [2024-11-17 11:26:39.863579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.158 [2024-11-17 11:26:39.863594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.158 [2024-11-17 11:26:39.863612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.158 [2024-11-17 11:26:39.863628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.158 [2024-11-17 11:26:39.863642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.158 [2024-11-17 11:26:39.863657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.158 [2024-11-17 11:26:39.863671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.158 [2024-11-17 11:26:39.863686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f7340 is same with the state(6) to be set
00:32:30.159 [2024-11-17 11:26:39.863704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:30.159 [2024-11-17 11:26:39.863715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:30.159 [2024-11-17 11:26:39.863727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82432 len:8 PRP1 0x0 PRP2 0x0
00:32:30.159 [2024-11-17 11:26:39.863740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.159 [2024-11-17 11:26:39.863827] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:30.159 [2024-11-17 11:26:39.863895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:30.159 [2024-11-17 11:26:39.863931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.159 [2024-11-17 11:26:39.863948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:30.159 [2024-11-17 11:26:39.863961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.159 [2024-11-17 11:26:39.863975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:30.159 [2024-11-17 11:26:39.863989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.159 [2024-11-17 11:26:39.864004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:30.159 [2024-11-17 11:26:39.864018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.159 [2024-11-17 11:26:39.864032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:30.159 [2024-11-17 11:26:39.867337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:30.159 [2024-11-17 11:26:39.867378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da3b0 (9): Bad file descriptor
00:32:30.159 [2024-11-17 11:26:39.977480] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
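(Editor's note: the abort flood above is mechanical — when the submission queue is deleted for the failover from 10.0.0.2:4420 to 10.0.0.2:4421, every queued READ/WRITE on qid:1 is completed with "ABORTED - SQ DELETION (00/08)". A small, hedged sketch for tallying those notices from a captured log; the function name `tally_aborted_commands` is mine, not part of SPDK — only the log line format is taken from this output:)

```python
import re
from collections import Counter

# Matches SPDK's nvme_io_qpair_print_command output as it appears in this log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally_aborted_commands(log_text: str) -> Counter:
    """Count READ/WRITE command prints per opcode in a log excerpt."""
    return Counter(m.group(1) for m in CMD_RE.finditer(log_text))

# Two lines copied verbatim (minus the elapsed-time prefix) from the log above.
sample = (
    "[2024-11-17 11:26:39.861664] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
    "[2024-11-17 11:26:39.863498] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000\n"
)
print(dict(tally_aborted_commands(sample)))  # {'READ': 1, 'WRITE': 1}
```

(Running it over the full excerpt would show how many in-flight commands the SQ deletion aborted per opcode before the controller reset completed.)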
00:32:30.159 8150.50 IOPS, 31.84 MiB/s [2024-11-17T10:26:54.817Z]
8396.67 IOPS, 32.80 MiB/s [2024-11-17T10:26:54.817Z]
8471.25 IOPS, 33.09 MiB/s [2024-11-17T10:26:54.817Z]
[2024-11-17 11:26:43.627673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.159 [2024-11-17 11:26:43.627719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:30.160 [... repeated READ (lba:111032 through lba:111184, len:8) and WRITE (lba:111472 through lba:111656, len:8) command prints, each followed by the same "ABORTED - SQ DELETION (00/08)" completion notice, elided ...]
00:32:30.160 [2024-11-17 11:26:43.629090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:30.160 [2024-11-17 11:26:43.629262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:30.160 [2024-11-17 11:26:43.629408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.160 [2024-11-17 11:26:43.629633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.160 [2024-11-17 11:26:43.629648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:30.161 [2024-11-17 11:26:43.629782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.629981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.629995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 
[2024-11-17 11:26:43.630276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 
[2024-11-17 11:26:43.630796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.161 [2024-11-17 11:26:43.630829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.161 [2024-11-17 11:26:43.630860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.162 [2024-11-17 11:26:43.630874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.630906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.630923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111280 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.630937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.630956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.630968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.630979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111288 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.630992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111296 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111304 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111312 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111320 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111328 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111336 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.162 [2024-11-17 11:26:43.631296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.162 [2024-11-17 11:26:43.631307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.162 [2024-11-17 11:26:43.631318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111344 len:8 PRP1 0x0 PRP2 0x0 00:32:30.162 [2024-11-17 11:26:43.631331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:32:30.162 [2024-11-17 11:26:43.631344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:30.162 [2024-11-17 11:26:43.631354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:30.162 [2024-11-17 11:26:43.631365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111352 len:8 PRP1 0x0 PRP2 0x0
00:32:30.162 [2024-11-17 11:26:43.631378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort/manual-complete cycle repeats for READ lba:111360 through lba:111464 (step 8), each completing ABORTED - SQ DELETION (00/08) ...]
00:32:30.163 [2024-11-17 11:26:43.632171] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:32:30.163 [2024-11-17 11:26:43.632210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:30.163 [2024-11-17 11:26:43.632244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same ASYNC EVENT REQUEST abort repeats for cid:2, cid:1, and cid:0 ...]
00:32:30.163 [2024-11-17 11:26:43.632341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:30.163 [2024-11-17 11:26:43.635640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:30.163 [2024-11-17 11:26:43.635682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da3b0 (9): Bad file descriptor
00:32:30.163 [2024-11-17 11:26:43.661493] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:30.163 8454.00 IOPS, 33.02 MiB/s [2024-11-17T10:26:54.821Z]
00:32:30.163 8488.50 IOPS, 33.16 MiB/s [2024-11-17T10:26:54.821Z]
00:32:30.163 8538.00 IOPS, 33.35 MiB/s [2024-11-17T10:26:54.821Z]
00:32:30.163 8536.12 IOPS, 33.34 MiB/s [2024-11-17T10:26:54.821Z]
00:32:30.163 8541.67 IOPS, 33.37 MiB/s [2024-11-17T10:26:54.821Z]
00:32:30.163 [2024-11-17 11:26:48.201847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:30.163 [2024-11-17 11:26:48.201901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort cycle repeats for READ lba:39944 through lba:40320 (step 8, varying cid), each completing ABORTED - SQ DELETION (00/08) ...]
00:32:30.164 [2024-11-17 11:26:48.203404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:30.164 [2024-11-17 11:26:48.203421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort cycle repeats for WRITE lba:40400 through lba:40568 (step 8, varying cid), one interleaved READ lba:40328, then WRITE lba:40576 through lba:40680, each completing ABORTED - SQ DELETION (00/08) ...]
00:32:30.165 [2024-11-17 11:26:48.204587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 
11:26:48.204763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.204979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.204993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.205005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.205019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.205032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.205047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.205059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.205080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.205093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.205110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:30.165 [2024-11-17 11:26:48.205123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.165 [2024-11-17 11:26:48.205154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40832 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40840 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40848 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205308] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40856 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40864 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40872 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40880 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 
[2024-11-17 11:26:48.205486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40888 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40896 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40904 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40912 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40920 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40928 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40936 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40944 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.205942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.205958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.205969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.205988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40952 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40336 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40344 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40352 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40360 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40368 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40376 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:30.166 [2024-11-17 11:26:48.206327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:30.166 [2024-11-17 11:26:48.206338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:32:30.166 [2024-11-17 11:26:48.206354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.166 [2024-11-17 11:26:48.206418] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:30.166 [2024-11-17 11:26:48.206483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.166 [2024-11-17 11:26:48.206503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.167 [2024-11-17 11:26:48.206519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.167 [2024-11-17 11:26:48.206548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.167 [2024-11-17 11:26:48.206563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.167 [2024-11-17 11:26:48.206577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.167 [2024-11-17 11:26:48.206591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:30.167 [2024-11-17 11:26:48.206604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:30.167 [2024-11-17 11:26:48.206618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:30.167 [2024-11-17 11:26:48.206685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da3b0 (9): Bad file descriptor 00:32:30.167 [2024-11-17 11:26:48.209984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:30.167 [2024-11-17 11:26:48.352908] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
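Each failover iteration in the log above ends with a "Resetting controller successful" notice, and the test later validates the run by counting those notices (the `grep -c 'Resetting controller successful'` step visible further down). A minimal Python sketch of that same check, assuming the log has been captured to a string; the function and variable names here are illustrative, not names from the test scripts:

```python
# Count "Resetting controller successful" notices in a captured test log.
# The failover test passes only when the count matches the expected number
# of failover/reset cycles (three, per the `(( count != 3 ))` check below).

def count_successful_resets(log_text: str) -> int:
    """Return the number of successful controller resets reported in the log."""
    return log_text.count("Resetting controller successful")

if __name__ == "__main__":
    # Hypothetical captured log excerpt; ellipses stand in for the full lines.
    sample_log = "\n".join([
        "[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [..., 2] Resetting controller successful.",
        "[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [..., 4] Resetting controller successful.",
        "[...] bdev_nvme_reset_ctrlr_complete: *NOTICE*: [..., 6] Resetting controller successful.",
    ])
    expected = 3  # illustrative: the test expects exactly three resets
    assert count_successful_resets(sample_log) == expected
```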
00:32:30.167 8420.20 IOPS, 32.89 MiB/s [2024-11-17T10:26:54.825Z] 8446.09 IOPS, 32.99 MiB/s [2024-11-17T10:26:54.825Z] 8484.50 IOPS, 33.14 MiB/s [2024-11-17T10:26:54.825Z] 8515.00 IOPS, 33.26 MiB/s [2024-11-17T10:26:54.825Z] 8543.50 IOPS, 33.37 MiB/s [2024-11-17T10:26:54.825Z] 8566.13 IOPS, 33.46 MiB/s 00:32:30.167 Latency(us) 00:32:30.167 [2024-11-17T10:26:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.167 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:30.167 Verification LBA range: start 0x0 length 0x4000 00:32:30.167 NVMe0n1 : 15.05 8538.84 33.35 773.82 0.00 13681.79 594.68 46215.02 00:32:30.167 [2024-11-17T10:26:54.825Z] =================================================================================================================== 00:32:30.167 [2024-11-17T10:26:54.825Z] Total : 8538.84 33.35 773.82 0.00 13681.79 594.68 46215.02 00:32:30.167 Received shutdown signal, test time was about 15.000000 seconds 00:32:30.167 00:32:30.167 Latency(us) 00:32:30.167 [2024-11-17T10:26:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.167 [2024-11-17T10:26:54.825Z] =================================================================================================================== 00:32:30.167 [2024-11-17T10:26:54.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=361820 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 361820 /var/tmp/bdevperf.sock 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 361820 ']' 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:30.167 [2024-11-17 11:26:54.542187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:30.167 11:26:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:30.424 [2024-11-17 11:26:54.818932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:30.424 11:26:54 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:30.681 NVMe0n1 00:32:30.681 11:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.246 00:32:31.246 11:26:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.503 00:32:31.503 11:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:31.503 11:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:31.761 11:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:32.018 11:26:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:35.297 11:26:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:35.297 11:26:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:35.297 11:26:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=362490 00:32:35.297 11:26:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.297 11:26:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 362490 00:32:36.669 { 00:32:36.669 "results": [ 00:32:36.669 { 00:32:36.669 "job": "NVMe0n1", 00:32:36.669 "core_mask": "0x1", 00:32:36.669 "workload": "verify", 00:32:36.669 "status": "finished", 00:32:36.669 "verify_range": { 00:32:36.669 "start": 0, 00:32:36.669 "length": 16384 00:32:36.669 }, 00:32:36.669 "queue_depth": 128, 00:32:36.669 "io_size": 4096, 00:32:36.669 "runtime": 1.005123, 00:32:36.669 "iops": 8735.24931774519, 00:32:36.669 "mibps": 34.12206764744215, 00:32:36.669 "io_failed": 0, 00:32:36.669 "io_timeout": 0, 00:32:36.669 "avg_latency_us": 14594.722891419893, 00:32:36.669 "min_latency_us": 3106.8918518518517, 00:32:36.669 "max_latency_us": 14175.194074074074 00:32:36.669 } 00:32:36.669 ], 00:32:36.669 "core_count": 1 00:32:36.669 } 00:32:36.669 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:36.669 [2024-11-17 11:26:54.056004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:32:36.669 [2024-11-17 11:26:54.056102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361820 ] 00:32:36.669 [2024-11-17 11:26:54.124915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.669 [2024-11-17 11:26:54.168621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.669 [2024-11-17 11:26:56.584157] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:36.669 [2024-11-17 11:26:56.584257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.669 [2024-11-17 11:26:56.584282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.669 [2024-11-17 11:26:56.584299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.669 [2024-11-17 11:26:56.584313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.669 [2024-11-17 11:26:56.584326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.669 [2024-11-17 11:26:56.584339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.669 [2024-11-17 11:26:56.584354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:36.669 [2024-11-17 11:26:56.584367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.669 [2024-11-17 11:26:56.584387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:36.669 [2024-11-17 11:26:56.584436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:36.669 [2024-11-17 11:26:56.584469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c13b0 (9): Bad file descriptor 00:32:36.669 [2024-11-17 11:26:56.716677] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:36.669 Running I/O for 1 seconds... 00:32:36.669 8652.00 IOPS, 33.80 MiB/s 00:32:36.669 Latency(us) 00:32:36.669 [2024-11-17T10:27:01.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.669 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:36.669 Verification LBA range: start 0x0 length 0x4000 00:32:36.669 NVMe0n1 : 1.01 8735.25 34.12 0.00 0.00 14594.72 3106.89 14175.19 00:32:36.669 [2024-11-17T10:27:01.327Z] =================================================================================================================== 00:32:36.669 [2024-11-17T10:27:01.327Z] Total : 8735.25 34.12 0.00 0.00 14594.72 3106.89 14175.19 00:32:36.669 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:36.669 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:36.928 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:37.185 11:27:01 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:37.185 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:37.442 11:27:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:37.700 11:27:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 361820 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 361820 ']' 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 361820 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 361820 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 361820' 00:32:40.983 killing process 
with pid 361820 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 361820 00:32:40.983 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 361820 00:32:41.241 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:41.241 11:27:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.499 rmmod nvme_tcp 00:32:41.499 rmmod nvme_fabrics 00:32:41.499 rmmod nvme_keyring 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 359556 ']' 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 359556 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 359556 ']' 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 359556 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 359556 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 359556' 00:32:41.499 killing process with pid 359556 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 359556 00:32:41.499 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 359556 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.757 11:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.293 00:32:44.293 real 0m35.915s 00:32:44.293 user 2m7.023s 00:32:44.293 sys 0m5.943s 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:44.293 ************************************ 00:32:44.293 END TEST nvmf_failover 00:32:44.293 ************************************ 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.293 ************************************ 00:32:44.293 START TEST nvmf_host_discovery 00:32:44.293 ************************************ 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:44.293 * Looking for test storage... 
00:32:44.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.293 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.294 --rc genhtml_branch_coverage=1 00:32:44.294 --rc genhtml_function_coverage=1 00:32:44.294 --rc 
genhtml_legend=1 00:32:44.294 --rc geninfo_all_blocks=1 00:32:44.294 --rc geninfo_unexecuted_blocks=1 00:32:44.294 00:32:44.294 ' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.294 --rc genhtml_branch_coverage=1 00:32:44.294 --rc genhtml_function_coverage=1 00:32:44.294 --rc genhtml_legend=1 00:32:44.294 --rc geninfo_all_blocks=1 00:32:44.294 --rc geninfo_unexecuted_blocks=1 00:32:44.294 00:32:44.294 ' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.294 --rc genhtml_branch_coverage=1 00:32:44.294 --rc genhtml_function_coverage=1 00:32:44.294 --rc genhtml_legend=1 00:32:44.294 --rc geninfo_all_blocks=1 00:32:44.294 --rc geninfo_unexecuted_blocks=1 00:32:44.294 00:32:44.294 ' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:44.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.294 --rc genhtml_branch_coverage=1 00:32:44.294 --rc genhtml_function_coverage=1 00:32:44.294 --rc genhtml_legend=1 00:32:44.294 --rc geninfo_all_blocks=1 00:32:44.294 --rc geninfo_unexecuted_blocks=1 00:32:44.294 00:32:44.294 ' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.294 11:27:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.294 11:27:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.294 11:27:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:44.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.294 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.295 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.295 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.295 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.295 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.295 11:27:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.196 
11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.196 11:27:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:46.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:46.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.196 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:46.197 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:46.197 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
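The `common.sh@411`/`@427` steps above map each PCI address to its kernel interface name by globbing the device's `net/` directory in sysfs and then stripping the path prefix. A minimal sketch of that pattern, using a throwaway temp directory in place of the real `/sys` tree (the `cvl_0_0` name and PCI address are taken from the trace; the rest is illustrative):

```shell
# Stand-in sysfs tree so the glob works without real hardware.
sysfs=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$sysfs/bus/pci/devices/$pci/net/cvl_0_0"

# common.sh@411: glob the net/ subdirectory of the PCI device.
pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
# common.sh@427: keep only the basename of each entry (the netdev name).
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

The `"${var[@]##*/}"` expansion applies `##*/` (greedy strip up to the last `/`) to every array element, which is why the trace goes from full sysfs paths to bare names like `cvl_0_0`.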
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:32:46.197 00:32:46.197 --- 10.0.0.2 ping statistics --- 00:32:46.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.197 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:32:46.197 00:32:46.197 --- 10.0.0.1 ping statistics --- 00:32:46.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.197 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.197 
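The `ipts` call at `common.sh@287` expands (at `@790`) into an `iptables` invocation that repeats the rule arguments inside an `SPDK_NVMF:` comment, so teardown can later find and delete exactly the rules the test added. A hedged sketch of that wrapper, not the literal SPDK code; to stay runnable without root it prints the command instead of executing it:

```shell
# Sketch of the ipts wrapper seen in the trace: forward all arguments to
# iptables and tag the rule with an SPDK_NVMF comment built from "$*".
# This version echoes the command rather than running it (no root needed).
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Because `"$*"` joins the arguments with single spaces, the generated comment matches the trace verbatim: `SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT`.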
11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=365808 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 365808 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 365808 ']' 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:46.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.197 11:27:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.456 [2024-11-17 11:27:10.865456] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:46.456 [2024-11-17 11:27:10.865561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:46.456 [2024-11-17 11:27:10.937822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.456 [2024-11-17 11:27:10.985554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.456 [2024-11-17 11:27:10.985637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.456 [2024-11-17 11:27:10.985652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.456 [2024-11-17 11:27:10.985664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.456 [2024-11-17 11:27:10.985675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:46.456 [2024-11-17 11:27:10.986279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.456 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.456 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:46.456 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:46.456 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:46.456 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 [2024-11-17 11:27:11.132755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 [2024-11-17 11:27:11.141021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:46.714 11:27:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 null0 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 null1 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.714 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=365855 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 365855 /tmp/host.sock 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 365855 ']' 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:46.715 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.715 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.715 [2024-11-17 11:27:11.218377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:32:46.715 [2024-11-17 11:27:11.218458] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365855 ] 00:32:46.715 [2024-11-17 11:27:11.284337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.715 [2024-11-17 11:27:11.329382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:46.973 11:27:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:46.973 11:27:11 
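The `get_subsystem_names`/`get_bdev_list` helpers expanded above all follow one pattern: query the host over the RPC socket, extract the `name` fields, then normalize with `sort | xargs` so the result compares as a single space-separated string. A sketch of that pattern with `rpc_cmd` stubbed to canned output (the real helper pipes `rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers` through `jq -r '.[].name'`):

```shell
# Stub standing in for: rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
#                       | jq -r '.[].name'
rpc_cmd() { printf 'nvme1\nnvme0\n'; }

# Same normalization as host/discovery.sh@59: stable order, one line.
get_subsystem_names() { rpc_cmd | sort | xargs; }

names=$(get_subsystem_names)
echo "$names"
```

The `sort | xargs` tail is what makes comparisons like `[[ '' == '' ]]` and `[[ nvme0 == \n\v\m\e\0 ]]` in the trace deterministic: an empty controller list collapses to the empty string, and multiple names always appear in sorted order.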
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.973 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.974 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:47.232 11:27:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.232 [2024-11-17 11:27:11.718468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:47.232 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
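The repeated `local max=10` / `(( max-- ))` / `eval "$cond"` expansions above come from the `waitforcondition` helper in `autotest_common.sh`. A minimal sketch of that polling loop (the attempt budget and 1-second back-off match the trace; the second parameter is an addition here for flexibility):

```shell
# Re-evaluate an arbitrary shell condition string until it passes or the
# attempt budget runs out. Mirrors the eval-based loop visible in the trace.
waitforcondition() {
    local cond=$1
    local max=${2:-10}      # attempt budget; 10 in the trace
    while (( max-- )); do
        if eval "$cond"; then
            return 0        # condition held
        fi
        sleep 1             # back off before the next poll
    done
    return 1                # gave up after max attempts
}
```

The trace's `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'` is exactly this loop with an RPC query as the condition string, which is why each poll re-runs `bdev_nvme_get_controllers` until the discovered controller shows up.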
00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:47.233 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.490 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:47.490 11:27:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:48.056 [2024-11-17 11:27:12.500234] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:48.056 [2024-11-17 11:27:12.500268] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:48.056 [2024-11-17 11:27:12.500291] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:48.056 [2024-11-17 11:27:12.627699] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:48.313 [2024-11-17 11:27:12.808966] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:48.313 [2024-11-17 11:27:12.810048] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1bf71b0:1 started. 00:32:48.313 [2024-11-17 11:27:12.811880] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:48.313 [2024-11-17 11:27:12.811903] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:48.313 [2024-11-17 11:27:12.859291] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bf71b0 was disconnected and freed. delete nvme_qpair. 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.313 11:27:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.313 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.572 11:27:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:48.572 
11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.572 [2024-11-17 11:27:13.081881] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1bf7b80:1 started. 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.572 [2024-11-17 11:27:13.089474] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1bf7b80 was disconnected and freed. delete nvme_qpair. 
00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:48.572 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.573 [2024-11-17 11:27:13.162801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:48.573 [2024-11-17 11:27:13.163837] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:48.573 [2024-11-17 11:27:13.163866] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.573 11:27:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:48.573 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:48.831 11:27:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:48.831 11:27:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:48.831 [2024-11-17 11:27:13.290690] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:48.831 [2024-11-17 11:27:13.394914] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:48.831 [2024-11-17 11:27:13.394983] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:48.831 [2024-11-17 11:27:13.394999] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:32:48.831 [2024-11-17 11:27:13.395008] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.765 [2024-11-17 11:27:14.382504] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:49.765 [2024-11-17 11:27:14.382544] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:49.765 [2024-11-17 11:27:14.383079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.765 [2024-11-17 11:27:14.383112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.765 [2024-11-17 11:27:14.383142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:49.765 [2024-11-17 11:27:14.383155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.765 [2024-11-17 11:27:14.383169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.765 [2024-11-17 11:27:14.383182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.765 [2024-11-17 11:27:14.383212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.765 [2024-11-17 11:27:14.383226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.765 [2024-11-17 11:27:14.383240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:49.765 11:27:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:49.765 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.766 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:49.766 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:49.766 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:49.766 [2024-11-17 11:27:14.393069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor 00:32:49.766 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.766 [2024-11-17 11:27:14.403110] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:49.766 [2024-11-17 11:27:14.403132] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:49.766 [2024-11-17 11:27:14.403143] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:49.766 [2024-11-17 11:27:14.403156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:49.766 [2024-11-17 11:27:14.403186] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:49.766 [2024-11-17 11:27:14.403339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.766 [2024-11-17 11:27:14.403369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420 00:32:49.766 [2024-11-17 11:27:14.403386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set 00:32:49.766 [2024-11-17 11:27:14.403410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor 00:32:49.766 [2024-11-17 11:27:14.403443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:49.766 [2024-11-17 11:27:14.403461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:49.766 [2024-11-17 11:27:14.403476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:49.766 [2024-11-17 11:27:14.403489] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:49.766 [2024-11-17 11:27:14.403515] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:49.766 [2024-11-17 11:27:14.403535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:49.766 [2024-11-17 11:27:14.413218] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:49.766 [2024-11-17 11:27:14.413238] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:32:49.766 [2024-11-17 11:27:14.413246] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:49.766 [2024-11-17 11:27:14.413253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:49.766 [2024-11-17 11:27:14.413293] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:49.766 [2024-11-17 11:27:14.413534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:49.766 [2024-11-17 11:27:14.413568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420 00:32:49.766 [2024-11-17 11:27:14.413586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set 00:32:49.766 [2024-11-17 11:27:14.413610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor 00:32:49.766 [2024-11-17 11:27:14.413643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:49.766 [2024-11-17 11:27:14.413660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:49.766 [2024-11-17 11:27:14.413674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:49.766 [2024-11-17 11:27:14.413687] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:49.766 [2024-11-17 11:27:14.413697] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:49.766 [2024-11-17 11:27:14.413705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:50.025 [2024-11-17 11:27:14.423329] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:50.025 [2024-11-17 11:27:14.423352] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:50.025 [2024-11-17 11:27:14.423369] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:50.025 [2024-11-17 11:27:14.423379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:50.025 [2024-11-17 11:27:14.423421] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:50.025 [2024-11-17 11:27:14.423573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:50.025 [2024-11-17 11:27:14.423601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420 00:32:50.025 [2024-11-17 11:27:14.423618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set 00:32:50.025 [2024-11-17 11:27:14.423640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor 00:32:50.025 [2024-11-17 11:27:14.423674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:50.025 [2024-11-17 11:27:14.423692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:50.025 [2024-11-17 11:27:14.423706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:50.025 [2024-11-17 11:27:14.423718] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:50.025 [2024-11-17 11:27:14.423727] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:50.025 [2024-11-17 11:27:14.423735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:50.025 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:50.025 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.025 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:50.026 [2024-11-17 11:27:14.433454] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:50.026 [2024-11-17 11:27:14.433477] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:50.026 [2024-11-17 11:27:14.433486] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.433493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:50.026 [2024-11-17 11:27:14.433543] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.433670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.026 [2024-11-17 11:27:14.433703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420
00:32:50.026 [2024-11-17 11:27:14.433721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set
00:32:50.026 [2024-11-17 11:27:14.433744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor
00:32:50.026 [2024-11-17 11:27:14.433776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:50.026 [2024-11-17 11:27:14.433793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:50.026 [2024-11-17 11:27:14.433807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:50.026 [2024-11-17 11:27:14.433819] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:50.026 [2024-11-17 11:27:14.433828] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:50.026 [2024-11-17 11:27:14.433836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:50.026 [2024-11-17 11:27:14.443577] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:50.026 [2024-11-17 11:27:14.443602] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:50.026 [2024-11-17 11:27:14.443612] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.443620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:50.026 [2024-11-17 11:27:14.443647] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.443771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.026 [2024-11-17 11:27:14.443799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420
00:32:50.026 [2024-11-17 11:27:14.443816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set
00:32:50.026 [2024-11-17 11:27:14.443839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor
00:32:50.026 [2024-11-17 11:27:14.443874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:50.026 [2024-11-17 11:27:14.443891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:50.026 [2024-11-17 11:27:14.443906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:50.026 [2024-11-17 11:27:14.443919] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:50.026 [2024-11-17 11:27:14.443928] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:50.026 [2024-11-17 11:27:14.443936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:50.026 [2024-11-17 11:27:14.453682] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:50.026 [2024-11-17 11:27:14.453704] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:50.026 [2024-11-17 11:27:14.453713] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.453721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:50.026 [2024-11-17 11:27:14.453746] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.453939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.026 [2024-11-17 11:27:14.453966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420
00:32:50.026 [2024-11-17 11:27:14.453982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set
00:32:50.026 [2024-11-17 11:27:14.454004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor
00:32:50.026 [2024-11-17 11:27:14.454024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:50.026 [2024-11-17 11:27:14.454037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:50.026 [2024-11-17 11:27:14.454051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:50.026 [2024-11-17 11:27:14.454063] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:50.026 [2024-11-17 11:27:14.454071] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:50.026 [2024-11-17 11:27:14.454079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.026 [2024-11-17 11:27:14.463780] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:32:50.026 [2024-11-17 11:27:14.463817] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:32:50.026 [2024-11-17 11:27:14.463826] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.463834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:50.026 [2024-11-17 11:27:14.463857] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:32:50.026 [2024-11-17 11:27:14.464044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:50.026 [2024-11-17 11:27:14.464071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc91f0 with addr=10.0.0.2, port=4420
00:32:50.026 [2024-11-17 11:27:14.464088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc91f0 is same with the state(6) to be set
00:32:50.026 [2024-11-17 11:27:14.464109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc91f0 (9): Bad file descriptor
00:32:50.026 [2024-11-17 11:27:14.464142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:50.026 [2024-11-17 11:27:14.464160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:50.026 [2024-11-17 11:27:14.464173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:50.026 [2024-11-17 11:27:14.464185] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:32:50.026 [2024-11-17 11:27:14.464194] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:32:50.026 [2024-11-17 11:27:14.464202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' [2024-11-17 11:27:14.469629] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found [2024-11-17 11:27:14.469659] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:32:50.026 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:50.027 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:50.285 11:27:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.218 [2024-11-17 11:27:15.698219] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:32:51.218 [2024-11-17 11:27:15.698252] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:32:51.218 [2024-11-17 11:27:15.698275] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:32:51.218 [2024-11-17 11:27:15.784545] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:32:51.477 [2024-11-17 11:27:15.883333] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:32:51.477 [2024-11-17 11:27:15.884172] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1bc4950:1 started.
00:32:51.477 [2024-11-17 11:27:15.886282] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:32:51.477 [2024-11-17 11:27:15.886328] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:32:51.477 [2024-11-17 11:27:15.887989] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1bc4950 was disconnected and freed. delete nvme_qpair.
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 request:
00:32:51.477 {
00:32:51.477 "name": "nvme",
00:32:51.477 "trtype": "tcp",
00:32:51.477 "traddr": "10.0.0.2",
00:32:51.477 "adrfam": "ipv4",
00:32:51.477 "trsvcid": "8009",
00:32:51.477 "hostnqn": "nqn.2021-12.io.spdk:test",
00:32:51.477 "wait_for_attach": true,
00:32:51.477 "method": "bdev_nvme_start_discovery",
00:32:51.477 "req_id": 1
00:32:51.477 }
00:32:51.477 Got JSON-RPC error response
00:32:51.477 response:
00:32:51.477 {
00:32:51.477 "code": -17,
00:32:51.477 "message": "File exists"
00:32:51.477 }
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 request:
00:32:51.477 {
00:32:51.477 "name": "nvme_second",
00:32:51.477 "trtype": "tcp",
00:32:51.477 "traddr": "10.0.0.2",
00:32:51.477 "adrfam": "ipv4",
00:32:51.477 "trsvcid": "8009",
00:32:51.477 "hostnqn": "nqn.2021-12.io.spdk:test",
00:32:51.477 "wait_for_attach": true,
00:32:51.477 "method": "bdev_nvme_start_discovery",
00:32:51.477 "req_id": 1
00:32:51.477 }
00:32:51.477 Got JSON-RPC error response
00:32:51.477 response:
00:32:51.477 {
00:32:51.477 "code": -17,
00:32:51.477 "message": "File exists"
00:32:51.477 }
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:32:51.477 11:27:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:32:51.477 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:51.478 11:27:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:32:52.851 [2024-11-17 11:27:17.097751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:52.851 [2024-11-17 11:27:17.097793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc50f0 with addr=10.0.0.2, port=8010
00:32:52.851 [2024-11-17 11:27:17.097819] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:32:52.851 [2024-11-17 11:27:17.097833] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:32:52.851 [2024-11-17 11:27:17.097845] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:32:53.784 [2024-11-17 11:27:18.100098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:53.784 [2024-11-17 11:27:18.100146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc50f0 with addr=10.0.0.2, port=8010
00:32:53.784 [2024-11-17 11:27:18.100167] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:32:53.784 [2024-11-17 11:27:18.100180] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:32:53.784 [2024-11-17 11:27:18.100192] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:32:54.736 [2024-11-17 11:27:19.102384] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:32:54.737 request:
00:32:54.737 {
00:32:54.737 "name": "nvme_second",
00:32:54.737 "trtype": "tcp",
00:32:54.737 "traddr": "10.0.0.2",
00:32:54.737 "adrfam": "ipv4",
00:32:54.737 "trsvcid": "8010",
00:32:54.737 "hostnqn": "nqn.2021-12.io.spdk:test",
00:32:54.737 "wait_for_attach": false,
00:32:54.737 "attach_timeout_ms": 3000,
00:32:54.737 "method": "bdev_nvme_start_discovery", 00:32:54.737 "req_id": 1 00:32:54.737 } 00:32:54.737 Got JSON-RPC error response 00:32:54.737 response: 00:32:54.737 { 00:32:54.737 "code": -110, 00:32:54.737 "message": "Connection timed out" 00:32:54.737 } 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 365855 00:32:54.737 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.738 rmmod nvme_tcp 00:32:54.738 rmmod nvme_fabrics 00:32:54.738 rmmod nvme_keyring 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 365808 ']' 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 365808 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 365808 ']' 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 365808 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365808 00:32:54.738 11:27:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365808' 00:32:54.738 killing process with pid 365808 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 365808 00:32:54.738 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 365808 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:55.001 11:27:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.908 11:27:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.908 00:32:56.908 real 0m13.073s 00:32:56.908 user 0m18.693s 00:32:56.908 sys 0m2.836s 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.908 ************************************ 00:32:56.908 END TEST nvmf_host_discovery 00:32:56.908 ************************************ 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.908 11:27:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.167 ************************************ 00:32:57.167 START TEST nvmf_host_multipath_status 00:32:57.167 ************************************ 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:57.167 * Looking for test storage... 
00:32:57.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.167 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:57.168 11:27:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.168 11:27:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:57.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.168 --rc genhtml_branch_coverage=1 00:32:57.168 --rc genhtml_function_coverage=1 00:32:57.168 --rc genhtml_legend=1 00:32:57.168 --rc geninfo_all_blocks=1 00:32:57.168 --rc geninfo_unexecuted_blocks=1 00:32:57.168 00:32:57.168 ' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:57.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.168 --rc genhtml_branch_coverage=1 00:32:57.168 --rc genhtml_function_coverage=1 00:32:57.168 --rc genhtml_legend=1 00:32:57.168 --rc geninfo_all_blocks=1 00:32:57.168 --rc geninfo_unexecuted_blocks=1 00:32:57.168 00:32:57.168 ' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:57.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.168 --rc genhtml_branch_coverage=1 00:32:57.168 --rc genhtml_function_coverage=1 00:32:57.168 --rc genhtml_legend=1 00:32:57.168 --rc geninfo_all_blocks=1 00:32:57.168 --rc geninfo_unexecuted_blocks=1 00:32:57.168 00:32:57.168 ' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:57.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.168 --rc genhtml_branch_coverage=1 00:32:57.168 --rc genhtml_function_coverage=1 00:32:57.168 --rc genhtml_legend=1 00:32:57.168 --rc geninfo_all_blocks=1 00:32:57.168 --rc geninfo_unexecuted_blocks=1 00:32:57.168 00:32:57.168 ' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:57.168 
11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:57.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.168 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.169 11:27:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.169 11:27:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:32:59.706 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:32:59.706 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:59.706 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:32:59.707 Found net devices under 0000:0a:00.0: cvl_0_0
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:32:59.707 Found net devices under 0000:0a:00.1: cvl_0_1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:59.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:59.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms
00:32:59.707
00:32:59.707 --- 10.0.0.2 ping statistics ---
00:32:59.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:59.707 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:59.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:59.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:32:59.707
00:32:59.707 --- 10.0.0.1 ping statistics ---
00:32:59.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:59.707 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=368892
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 368892
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 368892 ']'
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:59.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:59.707 11:27:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:59.707 [2024-11-17 11:27:23.964778] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:32:59.707 [2024-11-17 11:27:23.964886] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:59.707 [2024-11-17 11:27:24.038437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:32:59.707 [2024-11-17 11:27:24.086715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:59.707 [2024-11-17 11:27:24.086782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:59.707 [2024-11-17 11:27:24.086815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:59.707 [2024-11-17 11:27:24.086827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:59.707 [2024-11-17 11:27:24.086836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:59.707 [2024-11-17 11:27:24.088351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:59.707 [2024-11-17 11:27:24.088357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=368892
00:32:59.707 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:59.966 [2024-11-17 11:27:24.535196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:59.966 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:33:00.224 Malloc0
00:33:00.224 11:27:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:33:00.789 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:00.789 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:01.047 [2024-11-17 11:27:25.676577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:01.047 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:33:01.304 [2024-11-17 11:27:25.953303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=369175
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 369175 /var/tmp/bdevperf.sock
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369175 ']'
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:01.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:01.562 11:27:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:01.819 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:01.819 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:33:01.819 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:33:02.076 11:27:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:33:02.650 Nvme0n1
00:33:02.650 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:33:02.908 Nvme0n1
00:33:02.908 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:33:02.908 11:27:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:33:05.435 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:33:05.435 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:33:05.435 11:27:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:05.693 11:27:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:33:06.631 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:33:06.631 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:06.631 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.631 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:06.888 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:06.888 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:06.889 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:06.889 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:07.147 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:07.147 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:07.147 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.147 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:07.405 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.405 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:07.405 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.405 11:27:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:07.664 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.664 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:07.664 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.664 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:07.922 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:07.922 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:07.922 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:07.922 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:08.180 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:08.180 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:33:08.180 11:27:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:08.438 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:08.696 11:27:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.072 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:10.330 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.330 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:10.330 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.330 11:27:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:10.588 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.588 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:10.588 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.588 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:10.846 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:10.846 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:10.846 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:10.846 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:11.105 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:11.105 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:11.105 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:11.105 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:11.363 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:11.363 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:33:11.363 11:27:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:11.621 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:12.188 11:27:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:33:13.122 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:33:13.122 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:13.122 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.122 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:13.379 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:13.379 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:13.380 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.380 11:27:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:13.638 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:13.638 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:13.638 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.638 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:13.896 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:13.896 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:13.896 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:13.896 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:14.154 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.154 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:14.154 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:14.154 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:14.412 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.412 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:14.412 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:14.412 11:27:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:14.670 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:14.670 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:33:14.670 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:14.930 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:15.187 11:27:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:33:16.559 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:33:16.559 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:16.559 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:16.559 11:27:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:16.559 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:16.559 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:16.559 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:16.559 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:16.817 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:16.817 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:16.817 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.074 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:17.075 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.075 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:17.075 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.075 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:17.332 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.332 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:17.332 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:17.332 11:27:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:17.590 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:17.590 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:17.590 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.590 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:17.848 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:17.848 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:17.848 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:18.105 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:18.363 11:27:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:19.733 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:19.733 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:19.733 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.733 11:27:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.733 11:27:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.733 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:19.733 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.733 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.990 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:19.990 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.990 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.990 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.247 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.247 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.247 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.247 11:27:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.505 
11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.505 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:20.505 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.505 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:20.762 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.762 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:20.762 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.762 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.019 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:21.019 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:21.019 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:21.277 11:27:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:21.534 11:27:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.905 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.162 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.162 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.162 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.162 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.420 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.420 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.420 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.420 11:27:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.677 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.677 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:23.677 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.677 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.934 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.934 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:23.934 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.935 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.192 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.192 11:27:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:24.449 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:24.449 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:24.710 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:25.276 11:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:26.209 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:26.209 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.209 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:26.209 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.467 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.467 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:26.467 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.467 11:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.725 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.725 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.725 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.725 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.982 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.982 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.982 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:26.982 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:27.240 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.240 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:27.240 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.240 11:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.498 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.498 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:27.498 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.498 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:27.756 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.756 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:27.756 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:28.014 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.272 11:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:29.204 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:29.204 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:29.204 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.204 11:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.769 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.769 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:29.769 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.769 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:30.027 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.027 11:27:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:30.027 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.027 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:30.285 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.285 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:30.285 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.285 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.542 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.542 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.542 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.542 11:27:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.800 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.801 
11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.801 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.801 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:31.059 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.059 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:31.059 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:31.317 11:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:31.574 11:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:32.508 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:32.508 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:32.508 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.508 11:27:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.767 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.767 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.767 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.767 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.025 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.025 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.025 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.025 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.283 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.283 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.283 11:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.283 11:27:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.541 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.541 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.541 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.541 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:34.107 11:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:34.683 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:34.683 11:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:36.056 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:36.343 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:36.343 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:36.343 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:36.343 11:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:36.650 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:36.650 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:36.650 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:36.650 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:36.956 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:36.956 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:36.956 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:36.956 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:37.251 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:37.251 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:37.251 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:37.251 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 369175
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369175 ']'
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369175
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:37.522 11:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369175
00:33:37.522 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:33:37.522 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:33:37.522 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369175'
killing process with pid 369175
00:33:37.522 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369175
00:33:37.522 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369175
00:33:37.522 {
00:33:37.522   "results": [
00:33:37.522     {
00:33:37.522       "job": "Nvme0n1",
00:33:37.522       "core_mask": "0x4",
00:33:37.522       "workload": "verify",
00:33:37.522       "status": "terminated",
00:33:37.522       "verify_range": {
00:33:37.522         "start": 0,
00:33:37.522         "length": 16384
00:33:37.522       },
00:33:37.522       "queue_depth": 128,
00:33:37.522       "io_size": 4096,
00:33:37.522       "runtime": 34.301415,
00:33:37.522       "iops": 7969.029849060163,
00:33:37.522       "mibps": 31.12902284789126,
00:33:37.522       "io_failed": 0,
00:33:37.522       "io_timeout": 0,
00:33:37.522       "avg_latency_us": 16034.038339190043,
00:33:37.522       "min_latency_us": 190.38814814814816,
00:33:37.522       "max_latency_us": 4076242.1096296296
00:33:37.522     }
00:33:37.522   ],
00:33:37.522   "core_count": 1
00:33:37.522 }
00:33:37.817 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 369175
00:33:37.817 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-11-17 11:27:26.017058] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
[2024-11-17 11:27:26.017152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369175 ]
[2024-11-17 11:27:26.083959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-17 11:27:26.135246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
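The bdevperf result block above can be cross-checked for internal consistency. This is an illustrative sketch, not part of the test: the field names and values are copied from the JSON in the log, and the only assumption is the standard relationship MiB/s = IOPS × io_size / 2^20.

```python
# Sanity-check the bdevperf result fields reported in the log above.
# Values are copied from the JSON; the relationship, not the numbers,
# is the point: mibps should equal iops * io_size / 2**20.
import math

result = {
    "runtime": 34.301415,        # seconds
    "iops": 7969.029849060163,   # I/Os per second
    "mibps": 31.12902284789126,  # MiB per second
    "io_size": 4096,             # bytes per I/O
}

# 4096 / 2**20 == 1/256, so this is an exact power-of-two division.
derived_mibps = result["iops"] * result["io_size"] / (1 << 20)
assert math.isclose(derived_mibps, result["mibps"], rel_tol=1e-9)
```

The check passes, confirming the reported throughput figures are mutually consistent.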
00:33:37.817 8387.00 IOPS, 32.76 MiB/s [2024-11-17T10:28:02.475Z]
8386.50 IOPS, 32.76 MiB/s [2024-11-17T10:28:02.475Z]
8407.00 IOPS, 32.84 MiB/s [2024-11-17T10:28:02.475Z]
8416.00 IOPS, 32.88 MiB/s [2024-11-17T10:28:02.475Z]
8433.00 IOPS, 32.94 MiB/s [2024-11-17T10:28:02.475Z]
8440.50 IOPS, 32.97 MiB/s [2024-11-17T10:28:02.475Z]
8456.86 IOPS, 33.03 MiB/s [2024-11-17T10:28:02.475Z]
8456.38 IOPS, 33.03 MiB/s [2024-11-17T10:28:02.475Z]
8462.56 IOPS, 33.06 MiB/s [2024-11-17T10:28:02.475Z]
8455.30 IOPS, 33.03 MiB/s [2024-11-17T10:28:02.475Z]
8468.64 IOPS, 33.08 MiB/s [2024-11-17T10:28:02.475Z]
8453.67 IOPS, 33.02 MiB/s [2024-11-17T10:28:02.475Z]
8439.38 IOPS, 32.97 MiB/s [2024-11-17T10:28:02.475Z]
8453.07 IOPS, 33.02 MiB/s [2024-11-17T10:28:02.475Z]
[2024-11-17 11:27:42.693737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.817 [2024-11-17 11:27:42.693806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:37.817 [2024-11-17 11:27:42.693844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.817 [2024-11-17 11:27:42.693863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:37.817 [2024-11-17 11:27:42.693887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.817 [2024-11-17 11:27:42.693904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:37.817 [2024-11-17 11:27:42.693927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:33:37.817 [2024-11-17 11:27:42.693945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.693967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.694001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.694025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.694056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.694080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.694096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.694118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.694134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:37.817 [2024-11-17 11:27:42.695202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 
[2024-11-17 11:27:42.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.817 [2024-11-17 11:27:42.695591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.817 [2024-11-17 11:27:42.695608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 
11:27:42.695667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.695965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.695986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.696002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.696023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.696039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.696061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.696077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.696098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.696114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.818 [2024-11-17 11:27:42.697911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.818 [2024-11-17 11:27:42.697933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.697950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.697972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.697989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.819 [2024-11-17 11:27:42.698431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.698699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.698715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699791] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.819 [2024-11-17 11:27:42.699981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.819 [2024-11-17 11:27:42.699997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.700974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.700996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.701963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.701979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.702001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.702018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.702040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.702058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.702080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.820 [2024-11-17 11:27:42.702097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.820 [2024-11-17 11:27:42.702119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.820 [2024-11-17 11:27:42.702136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.702961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.702985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703100] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.821 [2024-11-17 11:27:42.703335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.821 [2024-11-17 11:27:42.703351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.703971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.703987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.822 [2024-11-17 11:27:42.704749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.704777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.704794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.707217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.707243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.707270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.707289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.707311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.822 [2024-11-17 11:27:42.707328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.822 [2024-11-17 11:27:42.707351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.707982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.707999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.823 [2024-11-17 11:27:42.708884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.823 [2024-11-17 11:27:42.708900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.708921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.708937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.708976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.708997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.709378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.709394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.824 8463.60 IOPS, 33.06 MiB/s [2024-11-17T10:28:02.482Z] [2024-11-17 11:27:42.710038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.824 
[2024-11-17 11:27:42.710207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.824 [2024-11-17 11:27:42.710384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.824 [2024-11-17 11:27:42.710406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.824 [2024-11-17 
11:27:42.710422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.824
[... 2024-11-17 11:27:42.710445 through 11:27:42.716076: repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs on qid:1 — WRITE commands for lba:98208 through lba:99048 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands for lba:98136 through lba:98200 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.716129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.716819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.716875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.716917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.716963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.716980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.717002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.717018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.717040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.827 [2024-11-17 11:27:42.717057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.827 [2024-11-17 11:27:42.717079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.828 [2024-11-17 11:27:42.717298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.717981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.717998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.828 [2024-11-17 11:27:42.718709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.828 [2024-11-17 11:27:42.718725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.718950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.718979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.829 [2024-11-17 11:27:42.719550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.829 [2024-11-17 11:27:42.719601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.829 [2024-11-17 11:27:42.719640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.829 [2024-11-17 11:27:42.719679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.829 [2024-11-17 11:27:42.719701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.829 [2024-11-17 11:27:42.719717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:37.829 [2024-11-17 11:27:42.719739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.829 [2024-11-17 11:27:42.719756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
[... repeated READ/WRITE command/completion pairs omitted: every I/O on qid:1 (lba 98128-99144, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) between 11:27:42.719 and 11:27:42.733 ...]
00:33:37.832 [2024-11-17 11:27:42.733731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.832 [2024-11-17 11:27:42.733748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:37.832 [2024-11-17 11:27:42.733771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.733787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.733825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.733852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.733890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.733906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.733927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.733942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.733962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.733977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.733997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.734012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.734033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.734047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.734068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.832 [2024-11-17 11:27:42.734082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.832 [2024-11-17 11:27:42.734102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.734117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.734369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.734384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.735271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.833 [2024-11-17 11:27:42.735317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.735957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.735987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.833 [2024-11-17 11:27:42.736547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.833 [2024-11-17 11:27:42.736586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.736965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.736981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.737486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.737502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.738188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.738212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.738244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.738263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.834 [2024-11-17 11:27:42.738286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.834 [2024-11-17 11:27:42.738303] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:37.834 [2024-11-17 11:27:42.738326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.834 [2024-11-17 11:27:42.738343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:37.834 [2024-11-17 11:27:42.738365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.834 [2024-11-17 11:27:42.738381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs, all on qid:1 and all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02): WRITE commands covering lba:98208 through lba:99144 and READ commands covering lba:98128 through lba:98200, sqhd advancing from 006e through 007f, wrapping to 0000, and continuing through 0061 ...]
00:33:37.837 [2024-11-17 11:27:42.744226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.837 [2024-11-17 11:27:42.744242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.837 [2024-11-17 11:27:42.744277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:37.837 [2024-11-17 11:27:42.744606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.837 [2024-11-17 11:27:42.744622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.744645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.744662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.838 [2024-11-17 11:27:42.745893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.745969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.745990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.838 [2024-11-17 11:27:42.746765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:37.838 [2024-11-17 11:27:42.746787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.746803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.746826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.746857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.746880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.746896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.746921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.746938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.746960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.746988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.839 [2024-11-17 11:27:42.747970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.839 [2024-11-17 11:27:42.747984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.839 [2024-11-17 11:27:42.748221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:37.839 [2024-11-17 11:27:42.748241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.748256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.748277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.748291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.748312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.748327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.748348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.748363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.748383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.748399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.749346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.749396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.749436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.840 [2024-11-17 11:27:42.749475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.749961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.749992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.840 [2024-11-17 11:27:42.750784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:37.840 [2024-11-17 11:27:42.750806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.750823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.750863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.750879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.750899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.750930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.750952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.750971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.750994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.751969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.751990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:37.841 [2024-11-17 11:27:42.752622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.841 [2024-11-17 11:27:42.752791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:37.841 [2024-11-17 11:27:42.752831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.752856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.752895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.752911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.752934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.752953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.752977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.752992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.753981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.753996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.754039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.754077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.754116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.754153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:37.842 [2024-11-17 11:27:42.754192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:37.842 [2024-11-17 11:27:42.754215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:37.842 [2024-11-17 11:27:42.754448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.842 [2024-11-17 11:27:42.754463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.754949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.754985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.755033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.755078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.755121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.755164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:42.755208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:42.755255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:42.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:42.755356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:42.755399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:42.755426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:42.755442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:37.843 7934.62 IOPS, 30.99 MiB/s [2024-11-17T10:28:02.501Z] 7467.88 IOPS, 29.17 MiB/s [2024-11-17T10:28:02.501Z] 7053.00 IOPS, 27.55 MiB/s [2024-11-17T10:28:02.501Z] 6681.79 IOPS, 26.10 MiB/s [2024-11-17T10:28:02.501Z] 6747.25 IOPS, 26.36 MiB/s [2024-11-17T10:28:02.501Z] 6831.81 IOPS, 26.69 MiB/s [2024-11-17T10:28:02.501Z] 6945.32 IOPS, 27.13 MiB/s [2024-11-17T10:28:02.501Z] 7132.26 IOPS, 27.86 MiB/s [2024-11-17T10:28:02.501Z] 7307.75 IOPS, 28.55 MiB/s [2024-11-17T10:28:02.501Z] 7444.20 IOPS, 29.08 MiB/s [2024-11-17T10:28:02.501Z] 7485.77 IOPS, 29.24 MiB/s [2024-11-17T10:28:02.501Z] 7519.48 IOPS, 29.37 MiB/s [2024-11-17T10:28:02.501Z] 7554.39 IOPS, 29.51 MiB/s [2024-11-17T10:28:02.501Z] 7645.10 IOPS, 29.86 MiB/s [2024-11-17T10:28:02.501Z] 7752.97 IOPS, 30.29 MiB/s [2024-11-17T10:28:02.501Z] 7866.00 IOPS, 30.73 MiB/s [2024-11-17T10:28:02.501Z] [2024-11-17 11:27:59.280223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:59.280609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:59.280665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32600 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.280966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.280983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.281005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.843 [2024-11-17 11:27:59.281022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:37.843 [2024-11-17 11:27:59.281043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.843 [2024-11-17 11:27:59.281059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.281505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.281522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:33:37.844 [2024-11-17 11:27:59.282704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 
[2024-11-17 11:27:59.282951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.282972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.282987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.844 [2024-11-17 11:27:59.283024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.283061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.283096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.283133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 
11:27:59.283154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.844 [2024-11-17 11:27:59.283169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.844 [2024-11-17 11:27:59.283719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.844 [2024-11-17 11:27:59.283764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.844 [2024-11-17 11:27:59.283810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:37.844 [2024-11-17 11:27:59.283833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.844 [2024-11-17 11:27:59.283850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:37.844 7931.03 IOPS, 30.98 MiB/s [2024-11-17T10:28:02.502Z] 7945.12 IOPS, 31.04 MiB/s [2024-11-17T10:28:02.502Z] 7967.94 IOPS, 31.12 MiB/s [2024-11-17T10:28:02.502Z] Received shutdown signal, test time was about 
34.302209 seconds
00:33:37.844
00:33:37.844 Latency(us)
00:33:37.844 [2024-11-17T10:28:02.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:37.844 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:37.844 Verification LBA range: start 0x0 length 0x4000
00:33:37.844 Nvme0n1 : 34.30 7969.03 31.13 0.00 0.00 16034.04 190.39 4076242.11
00:33:37.844 [2024-11-17T10:28:02.502Z] ===================================================================================================================
00:33:37.844 [2024-11-17T10:28:02.502Z] Total : 7969.03 31.13 0.00 0.00 16034.04 190.39 4076242.11
00:33:37.844 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:38.104 rmmod nvme_tcp
00:33:38.104 rmmod nvme_fabrics
00:33:38.104 rmmod nvme_keyring
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 368892 ']'
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 368892
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 368892 ']'
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 368892
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 368892
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 368892'
00:33:38.104 killing process with pid 368892
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 368892
00:33:38.104 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 368892
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:38.363 11:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:40.266 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:40.266
00:33:40.266 real 0m43.285s
00:33:40.266 user 2m12.005s
00:33:40.266 sys 0m10.787s
00:33:40.267 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:40.267 11:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:40.267 ************************************
00:33:40.267 END TEST nvmf_host_multipath_status
00:33:40.267 ************************************
00:33:40.267 11:28:04 nvmf_tcp.nvmf_host
-- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:40.267 11:28:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:40.267 11:28:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.267 11:28:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:40.267 ************************************ 00:33:40.267 START TEST nvmf_discovery_remove_ifc 00:33:40.267 ************************************ 00:33:40.267 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:40.526 * Looking for test storage... 00:33:40.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:40.526 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.526 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.526 11:28:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 
00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.526 --rc genhtml_branch_coverage=1 00:33:40.526 --rc genhtml_function_coverage=1 00:33:40.526 --rc genhtml_legend=1 00:33:40.526 --rc geninfo_all_blocks=1 
00:33:40.526 --rc geninfo_unexecuted_blocks=1 00:33:40.526 00:33:40.526 ' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.526 --rc genhtml_branch_coverage=1 00:33:40.526 --rc genhtml_function_coverage=1 00:33:40.526 --rc genhtml_legend=1 00:33:40.526 --rc geninfo_all_blocks=1 00:33:40.526 --rc geninfo_unexecuted_blocks=1 00:33:40.526 00:33:40.526 ' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.526 --rc genhtml_branch_coverage=1 00:33:40.526 --rc genhtml_function_coverage=1 00:33:40.526 --rc genhtml_legend=1 00:33:40.526 --rc geninfo_all_blocks=1 00:33:40.526 --rc geninfo_unexecuted_blocks=1 00:33:40.526 00:33:40.526 ' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.526 --rc genhtml_branch_coverage=1 00:33:40.526 --rc genhtml_function_coverage=1 00:33:40.526 --rc genhtml_legend=1 00:33:40.526 --rc geninfo_all_blocks=1 00:33:40.526 --rc geninfo_unexecuted_blocks=1 00:33:40.526 00:33:40.526 ' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.526 
11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.526 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.527 
11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:40.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.527 11:28:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.527 11:28:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:43.057 11:28:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:43.057 11:28:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:43.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.057 11:28:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:43.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:43.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:43.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:43.057 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:43.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:43.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:33:43.057 00:33:43.058 --- 10.0.0.2 ping statistics --- 00:33:43.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.058 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:43.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:43.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:33:43.058 00:33:43.058 --- 10.0.0.1 ping statistics --- 00:33:43.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:43.058 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=375542 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 375542 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 375542 ']' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.058 [2024-11-17 11:28:07.353114] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:33:43.058 [2024-11-17 11:28:07.353188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.058 [2024-11-17 11:28:07.424470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.058 [2024-11-17 11:28:07.469753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.058 [2024-11-17 11:28:07.469817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:43.058 [2024-11-17 11:28:07.469848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.058 [2024-11-17 11:28:07.469862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.058 [2024-11-17 11:28:07.469872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.058 [2024-11-17 11:28:07.470544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.058 [2024-11-17 11:28:07.622863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.058 [2024-11-17 11:28:07.631074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:43.058 null0 00:33:43.058 [2024-11-17 11:28:07.662960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=375677 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 375677 /tmp/host.sock 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 375677 ']' 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:43.058 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:43.058 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.317 [2024-11-17 11:28:07.728675] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:33:43.317 [2024-11-17 11:28:07.728754] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375677 ] 00:33:43.317 [2024-11-17 11:28:07.793715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.317 [2024-11-17 11:28:07.838646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.317 11:28:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.576 11:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.576 11:28:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:43.576 11:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.576 11:28:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.509 [2024-11-17 11:28:09.110628] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:44.509 [2024-11-17 11:28:09.110652] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:44.509 [2024-11-17 11:28:09.110679] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.765 [2024-11-17 11:28:09.196986] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:44.765 [2024-11-17 11:28:09.379165] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:44.765 [2024-11-17 11:28:09.380190] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d5bc00:1 started. 
00:33:44.765 [2024-11-17 11:28:09.381843] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:44.765 [2024-11-17 11:28:09.381897] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:44.765 [2024-11-17 11:28:09.381927] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:44.765 [2024-11-17 11:28:09.381947] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:44.765 [2024-11-17 11:28:09.381969] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:44.765 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != 
\n\v\m\e\0\n\1 ]] 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:45.022 11:28:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.955 11:28:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:45.955 11:28:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:33:47.329 11:28:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:48.262 11:28:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:49.196 11:28:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:49.196 11:28:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:50.129 11:28:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:50.388 [2024-11-17 11:28:14.823963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:50.388 
[2024-11-17 11:28:14.824040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.388 [2024-11-17 11:28:14.824063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.824081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.388 [2024-11-17 11:28:14.824093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.824106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.388 [2024-11-17 11:28:14.824126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.824140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.388 [2024-11-17 11:28:14.824152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.824165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:50.388 [2024-11-17 11:28:14.824176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.824188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(6) to be set 00:33:50.388 [2024-11-17 11:28:14.833980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1d38400 (9): Bad file descriptor 00:33:50.388 [2024-11-17 11:28:14.844025] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:50.388 [2024-11-17 11:28:14.844051] bdev_nvme.c:2342:bdev_nvme_reset_destroy_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting qpair 0x1d5bc00:1. 00:33:50.388 [2024-11-17 11:28:14.844108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.388 [2024-11-17 11:28:14.844129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.844165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:64 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.388 [2024-11-17 11:28:14.844180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.844197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.388 [2024-11-17 11:28:14.844210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:50.388 [2024-11-17 11:28:14.844318] bdev_nvme.c:1776:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d5bc00 was disconnected and freed in a reset ctrlr sequence. 00:33:50.388 [2024-11-17 11:28:14.844337] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:50.388 [2024-11-17 11:28:14.844347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:50.388 [2024-11-17 11:28:14.844356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:50.388 [2024-11-17 11:28:14.844390] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:51.322 [2024-11-17 11:28:15.872556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:51.322 [2024-11-17 11:28:15.872616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d38400 with addr=10.0.0.2, port=4420 00:33:51.322 [2024-11-17 11:28:15.872645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38400 is same with the state(6) to be set 00:33:51.322 [2024-11-17 11:28:15.872692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d38400 (9): Bad file descriptor 00:33:51.322 [2024-11-17 11:28:15.873119] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:33:51.322 [2024-11-17 11:28:15.873176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:51.322 [2024-11-17 11:28:15.873195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:51.322 [2024-11-17 11:28:15.873211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:51.322 [2024-11-17 11:28:15.873223] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:51.322 [2024-11-17 11:28:15.873233] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:51.322 [2024-11-17 11:28:15.873241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:51.322 [2024-11-17 11:28:15.873255] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:51.322 [2024-11-17 11:28:15.873263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:51.322 11:28:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:52.255 [2024-11-17 11:28:16.875671] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev nvme0n1: Input/output error 00:33:52.255 [2024-11-17 11:28:16.875715] vbdev_gpt.c: 467:gpt_bdev_complete: *ERROR*: Gpt: bdev=nvme0n1 io error 00:33:52.255 [2024-11-17 11:28:16.875881] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:52.255 [2024-11-17 11:28:16.875954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:52.255 [2024-11-17 11:28:16.875974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:52.255 [2024-11-17 11:28:16.875986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:52.255 [2024-11-17 11:28:16.876000] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:52.255 [2024-11-17 11:28:16.876012] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:52.255 [2024-11-17 11:28:16.876021] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:52.255 [2024-11-17 11:28:16.876028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:52.255 [2024-11-17 11:28:16.876067] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:52.255 [2024-11-17 11:28:16.876119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.255 [2024-11-17 11:28:16.876140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.255 [2024-11-17 11:28:16.876159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.255 [2024-11-17 11:28:16.876174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.255 [2024-11-17 11:28:16.876192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.255 [2024-11-17 11:28:16.876206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.255 [2024-11-17 11:28:16.876220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.255 [2024-11-17 11:28:16.876248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.255 [2024-11-17 11:28:16.876262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:52.255 [2024-11-17 11:28:16.876274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:52.255 [2024-11-17 11:28:16.876287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:52.255 [2024-11-17 11:28:16.876795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d27b40 (9): Bad file descriptor 00:33:52.255 [2024-11-17 11:28:16.877820] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:52.255 [2024-11-17 11:28:16.877856] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:52.255 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.255 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.255 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.255 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.256 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.256 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.256 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.256 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:52.513 11:28:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:52.513 11:28:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.513 11:28:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:52.513 11:28:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:53.447 11:28:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:53.447 11:28:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.380 [2024-11-17 11:28:18.936206] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:54.380 [2024-11-17 11:28:18.936238] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:54.380 [2024-11-17 11:28:18.936260] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:54.648 [2024-11-17 11:28:19.063677] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:54.648 11:28:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:54.648 [2024-11-17 11:28:19.125330] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:54.648 [2024-11-17 11:28:19.126115] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1d3ae80:1 started. 00:33:54.648 [2024-11-17 11:28:19.127454] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:54.648 [2024-11-17 11:28:19.127497] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:54.648 [2024-11-17 11:28:19.127549] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:54.648 [2024-11-17 11:28:19.127573] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:54.648 [2024-11-17 11:28:19.127597] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:54.648 [2024-11-17 11:28:19.135164] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1d3ae80 was disconnected and freed. delete nvme_qpair. 
00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 375677 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 375677 ']' 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 375677 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375677 00:33:55.626 
11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375677' 00:33:55.626 killing process with pid 375677 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 375677 00:33:55.626 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 375677 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.885 rmmod nvme_tcp 00:33:55.885 rmmod nvme_fabrics 00:33:55.885 rmmod nvme_keyring 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 375542 ']' 00:33:55.885 11:28:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 375542 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 375542 ']' 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 375542 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375542 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375542' 00:33:55.885 killing process with pid 375542 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 375542 00:33:55.885 [2024-11-17 11:28:20.471415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380060 is same with the state(6) to be set 00:33:55.885 [2024-11-17 11:28:20.471469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2380060 is same with the state(6) to be set 00:33:55.885 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 375542 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.143 11:28:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.143 11:28:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.680 00:33:58.680 real 0m17.829s 00:33:58.680 user 0m25.522s 00:33:58.680 sys 0m3.329s 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:58.680 ************************************ 00:33:58.680 END TEST nvmf_discovery_remove_ifc 00:33:58.680 ************************************ 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:58.680 11:28:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.680 ************************************ 00:33:58.680 START TEST nvmf_identify_kernel_target 00:33:58.680 ************************************ 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:58.680 * Looking for test storage... 00:33:58.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.680 11:28:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- 
# echo 2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.680 --rc genhtml_branch_coverage=1 00:33:58.680 --rc genhtml_function_coverage=1 00:33:58.680 --rc genhtml_legend=1 00:33:58.680 --rc geninfo_all_blocks=1 00:33:58.680 --rc geninfo_unexecuted_blocks=1 00:33:58.680 00:33:58.680 ' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.680 --rc genhtml_branch_coverage=1 00:33:58.680 --rc genhtml_function_coverage=1 00:33:58.680 --rc genhtml_legend=1 00:33:58.680 --rc geninfo_all_blocks=1 00:33:58.680 --rc geninfo_unexecuted_blocks=1 00:33:58.680 00:33:58.680 ' 00:33:58.680 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.680 --rc genhtml_branch_coverage=1 00:33:58.680 --rc genhtml_function_coverage=1 00:33:58.680 --rc genhtml_legend=1 00:33:58.680 --rc geninfo_all_blocks=1 00:33:58.680 --rc geninfo_unexecuted_blocks=1 00:33:58.680 00:33:58.680 ' 00:33:58.680 11:28:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:58.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.680 --rc genhtml_branch_coverage=1 00:33:58.680 --rc genhtml_function_coverage=1 00:33:58.680 --rc genhtml_legend=1 00:33:58.680 --rc geninfo_all_blocks=1 00:33:58.680 --rc geninfo_unexecuted_blocks=1 00:33:58.681 00:33:58.681 ' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.681 11:28:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.681 11:28:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.681 11:28:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:34:00.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:00.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:00.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.583 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.584 
11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:00.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.584 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:34:00.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:34:00.844 00:34:00.844 --- 10.0.0.2 ping statistics --- 00:34:00.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.844 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:00.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:34:00.844 00:34:00.844 --- 10.0.0.1 ping statistics --- 00:34:00.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.844 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # 
trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:00.844 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:00.845 11:28:25 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:00.845 11:28:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:02.219 Waiting for block devices as requested 00:34:02.219 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:02.219 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:02.219 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:02.219 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.478 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.478 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:02.478 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:02.478 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:02.737 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:02.737 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:02.737 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:02.996 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:02.996 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:02.996 0000:80:04.3 (8086 0e23): vfio-pci 
-> ioatdma 00:34:02.996 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:03.254 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:03.254 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:03.254 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:03.512 No valid GPT data, bailing 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 
00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:03.512 11:28:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:03.512 00:34:03.512 Discovery Log Number of Records 2, Generation counter 2 00:34:03.512 =====Discovery Log Entry 0====== 00:34:03.512 trtype: tcp 00:34:03.512 
adrfam: ipv4 00:34:03.512 subtype: current discovery subsystem 00:34:03.512 treq: not specified, sq flow control disable supported 00:34:03.512 portid: 1 00:34:03.512 trsvcid: 4420 00:34:03.512 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:03.512 traddr: 10.0.0.1 00:34:03.512 eflags: none 00:34:03.512 sectype: none 00:34:03.512 =====Discovery Log Entry 1====== 00:34:03.512 trtype: tcp 00:34:03.512 adrfam: ipv4 00:34:03.512 subtype: nvme subsystem 00:34:03.512 treq: not specified, sq flow control disable supported 00:34:03.512 portid: 1 00:34:03.512 trsvcid: 4420 00:34:03.512 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:03.512 traddr: 10.0.0.1 00:34:03.512 eflags: none 00:34:03.512 sectype: none 00:34:03.512 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:03.512 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:03.772 ===================================================== 00:34:03.772 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:03.772 ===================================================== 00:34:03.772 Controller Capabilities/Features 00:34:03.772 ================================ 00:34:03.772 Vendor ID: 0000 00:34:03.772 Subsystem Vendor ID: 0000 00:34:03.772 Serial Number: a3aae25e2e09819cc962 00:34:03.772 Model Number: Linux 00:34:03.772 Firmware Version: 6.8.9-20 00:34:03.772 Recommended Arb Burst: 0 00:34:03.772 IEEE OUI Identifier: 00 00 00 00:34:03.772 Multi-path I/O 00:34:03.772 May have multiple subsystem ports: No 00:34:03.772 May have multiple controllers: No 00:34:03.772 Associated with SR-IOV VF: No 00:34:03.772 Max Data Transfer Size: Unlimited 00:34:03.772 Max Number of Namespaces: 0 00:34:03.772 Max Number of I/O Queues: 1024 00:34:03.772 NVMe Specification Version (VS): 1.3 00:34:03.772 NVMe Specification Version 
(Identify): 1.3 00:34:03.772 Maximum Queue Entries: 1024 00:34:03.772 Contiguous Queues Required: No 00:34:03.772 Arbitration Mechanisms Supported 00:34:03.772 Weighted Round Robin: Not Supported 00:34:03.772 Vendor Specific: Not Supported 00:34:03.772 Reset Timeout: 7500 ms 00:34:03.772 Doorbell Stride: 4 bytes 00:34:03.772 NVM Subsystem Reset: Not Supported 00:34:03.772 Command Sets Supported 00:34:03.772 NVM Command Set: Supported 00:34:03.772 Boot Partition: Not Supported 00:34:03.772 Memory Page Size Minimum: 4096 bytes 00:34:03.772 Memory Page Size Maximum: 4096 bytes 00:34:03.772 Persistent Memory Region: Not Supported 00:34:03.772 Optional Asynchronous Events Supported 00:34:03.772 Namespace Attribute Notices: Not Supported 00:34:03.772 Firmware Activation Notices: Not Supported 00:34:03.772 ANA Change Notices: Not Supported 00:34:03.772 PLE Aggregate Log Change Notices: Not Supported 00:34:03.772 LBA Status Info Alert Notices: Not Supported 00:34:03.772 EGE Aggregate Log Change Notices: Not Supported 00:34:03.772 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.772 Zone Descriptor Change Notices: Not Supported 00:34:03.772 Discovery Log Change Notices: Supported 00:34:03.772 Controller Attributes 00:34:03.772 128-bit Host Identifier: Not Supported 00:34:03.772 Non-Operational Permissive Mode: Not Supported 00:34:03.772 NVM Sets: Not Supported 00:34:03.772 Read Recovery Levels: Not Supported 00:34:03.772 Endurance Groups: Not Supported 00:34:03.772 Predictable Latency Mode: Not Supported 00:34:03.772 Traffic Based Keep ALive: Not Supported 00:34:03.772 Namespace Granularity: Not Supported 00:34:03.772 SQ Associations: Not Supported 00:34:03.772 UUID List: Not Supported 00:34:03.772 Multi-Domain Subsystem: Not Supported 00:34:03.772 Fixed Capacity Management: Not Supported 00:34:03.772 Variable Capacity Management: Not Supported 00:34:03.772 Delete Endurance Group: Not Supported 00:34:03.772 Delete NVM Set: Not Supported 00:34:03.772 Extended LBA 
Formats Supported: Not Supported 00:34:03.772 Flexible Data Placement Supported: Not Supported 00:34:03.772 00:34:03.772 Controller Memory Buffer Support 00:34:03.772 ================================ 00:34:03.772 Supported: No 00:34:03.772 00:34:03.772 Persistent Memory Region Support 00:34:03.772 ================================ 00:34:03.772 Supported: No 00:34:03.772 00:34:03.772 Admin Command Set Attributes 00:34:03.772 ============================ 00:34:03.772 Security Send/Receive: Not Supported 00:34:03.772 Format NVM: Not Supported 00:34:03.772 Firmware Activate/Download: Not Supported 00:34:03.772 Namespace Management: Not Supported 00:34:03.772 Device Self-Test: Not Supported 00:34:03.772 Directives: Not Supported 00:34:03.772 NVMe-MI: Not Supported 00:34:03.772 Virtualization Management: Not Supported 00:34:03.772 Doorbell Buffer Config: Not Supported 00:34:03.772 Get LBA Status Capability: Not Supported 00:34:03.772 Command & Feature Lockdown Capability: Not Supported 00:34:03.772 Abort Command Limit: 1 00:34:03.772 Async Event Request Limit: 1 00:34:03.772 Number of Firmware Slots: N/A 00:34:03.772 Firmware Slot 1 Read-Only: N/A 00:34:03.772 Firmware Activation Without Reset: N/A 00:34:03.772 Multiple Update Detection Support: N/A 00:34:03.772 Firmware Update Granularity: No Information Provided 00:34:03.772 Per-Namespace SMART Log: No 00:34:03.772 Asymmetric Namespace Access Log Page: Not Supported 00:34:03.772 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:03.772 Command Effects Log Page: Not Supported 00:34:03.772 Get Log Page Extended Data: Supported 00:34:03.772 Telemetry Log Pages: Not Supported 00:34:03.772 Persistent Event Log Pages: Not Supported 00:34:03.772 Supported Log Pages Log Page: May Support 00:34:03.772 Commands Supported & Effects Log Page: Not Supported 00:34:03.772 Feature Identifiers & Effects Log Page:May Support 00:34:03.772 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.772 Data Area 4 for Telemetry Log: 
Not Supported 00:34:03.772 Error Log Page Entries Supported: 1 00:34:03.772 Keep Alive: Not Supported 00:34:03.772 00:34:03.773 NVM Command Set Attributes 00:34:03.773 ========================== 00:34:03.773 Submission Queue Entry Size 00:34:03.773 Max: 1 00:34:03.773 Min: 1 00:34:03.773 Completion Queue Entry Size 00:34:03.773 Max: 1 00:34:03.773 Min: 1 00:34:03.773 Number of Namespaces: 0 00:34:03.773 Compare Command: Not Supported 00:34:03.773 Write Uncorrectable Command: Not Supported 00:34:03.773 Dataset Management Command: Not Supported 00:34:03.773 Write Zeroes Command: Not Supported 00:34:03.773 Set Features Save Field: Not Supported 00:34:03.773 Reservations: Not Supported 00:34:03.773 Timestamp: Not Supported 00:34:03.773 Copy: Not Supported 00:34:03.773 Volatile Write Cache: Not Present 00:34:03.773 Atomic Write Unit (Normal): 1 00:34:03.773 Atomic Write Unit (PFail): 1 00:34:03.773 Atomic Compare & Write Unit: 1 00:34:03.773 Fused Compare & Write: Not Supported 00:34:03.773 Scatter-Gather List 00:34:03.773 SGL Command Set: Supported 00:34:03.773 SGL Keyed: Not Supported 00:34:03.773 SGL Bit Bucket Descriptor: Not Supported 00:34:03.773 SGL Metadata Pointer: Not Supported 00:34:03.773 Oversized SGL: Not Supported 00:34:03.773 SGL Metadata Address: Not Supported 00:34:03.773 SGL Offset: Supported 00:34:03.773 Transport SGL Data Block: Not Supported 00:34:03.773 Replay Protected Memory Block: Not Supported 00:34:03.773 00:34:03.773 Firmware Slot Information 00:34:03.773 ========================= 00:34:03.773 Active slot: 0 00:34:03.773 00:34:03.773 00:34:03.773 Error Log 00:34:03.773 ========= 00:34:03.773 00:34:03.773 Active Namespaces 00:34:03.773 ================= 00:34:03.773 Discovery Log Page 00:34:03.773 ================== 00:34:03.773 Generation Counter: 2 00:34:03.773 Number of Records: 2 00:34:03.773 Record Format: 0 00:34:03.773 00:34:03.773 Discovery Log Entry 0 00:34:03.773 ---------------------- 00:34:03.773 Transport Type: 3 (TCP) 
00:34:03.773 Address Family: 1 (IPv4) 00:34:03.773 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:03.773 Entry Flags: 00:34:03.773 Duplicate Returned Information: 0 00:34:03.773 Explicit Persistent Connection Support for Discovery: 0 00:34:03.773 Transport Requirements: 00:34:03.773 Secure Channel: Not Specified 00:34:03.773 Port ID: 1 (0x0001) 00:34:03.773 Controller ID: 65535 (0xffff) 00:34:03.773 Admin Max SQ Size: 32 00:34:03.773 Transport Service Identifier: 4420 00:34:03.773 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:03.773 Transport Address: 10.0.0.1 00:34:03.773 Discovery Log Entry 1 00:34:03.773 ---------------------- 00:34:03.773 Transport Type: 3 (TCP) 00:34:03.773 Address Family: 1 (IPv4) 00:34:03.773 Subsystem Type: 2 (NVM Subsystem) 00:34:03.773 Entry Flags: 00:34:03.773 Duplicate Returned Information: 0 00:34:03.773 Explicit Persistent Connection Support for Discovery: 0 00:34:03.773 Transport Requirements: 00:34:03.773 Secure Channel: Not Specified 00:34:03.773 Port ID: 1 (0x0001) 00:34:03.773 Controller ID: 65535 (0xffff) 00:34:03.773 Admin Max SQ Size: 32 00:34:03.773 Transport Service Identifier: 4420 00:34:03.773 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:03.773 Transport Address: 10.0.0.1 00:34:03.773 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:03.773 get_feature(0x01) failed 00:34:03.773 get_feature(0x02) failed 00:34:03.773 get_feature(0x04) failed 00:34:03.773 ===================================================== 00:34:03.773 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:03.773 ===================================================== 00:34:03.773 Controller Capabilities/Features 00:34:03.773 ================================ 
00:34:03.773 Vendor ID: 0000 00:34:03.773 Subsystem Vendor ID: 0000 00:34:03.773 Serial Number: a2b7b545fcb999633f04 00:34:03.773 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:03.773 Firmware Version: 6.8.9-20 00:34:03.773 Recommended Arb Burst: 6 00:34:03.773 IEEE OUI Identifier: 00 00 00 00:34:03.773 Multi-path I/O 00:34:03.773 May have multiple subsystem ports: Yes 00:34:03.773 May have multiple controllers: Yes 00:34:03.773 Associated with SR-IOV VF: No 00:34:03.773 Max Data Transfer Size: Unlimited 00:34:03.773 Max Number of Namespaces: 1024 00:34:03.773 Max Number of I/O Queues: 128 00:34:03.773 NVMe Specification Version (VS): 1.3 00:34:03.773 NVMe Specification Version (Identify): 1.3 00:34:03.773 Maximum Queue Entries: 1024 00:34:03.773 Contiguous Queues Required: No 00:34:03.773 Arbitration Mechanisms Supported 00:34:03.773 Weighted Round Robin: Not Supported 00:34:03.773 Vendor Specific: Not Supported 00:34:03.773 Reset Timeout: 7500 ms 00:34:03.773 Doorbell Stride: 4 bytes 00:34:03.773 NVM Subsystem Reset: Not Supported 00:34:03.773 Command Sets Supported 00:34:03.773 NVM Command Set: Supported 00:34:03.773 Boot Partition: Not Supported 00:34:03.773 Memory Page Size Minimum: 4096 bytes 00:34:03.773 Memory Page Size Maximum: 4096 bytes 00:34:03.773 Persistent Memory Region: Not Supported 00:34:03.773 Optional Asynchronous Events Supported 00:34:03.773 Namespace Attribute Notices: Supported 00:34:03.773 Firmware Activation Notices: Not Supported 00:34:03.773 ANA Change Notices: Supported 00:34:03.773 PLE Aggregate Log Change Notices: Not Supported 00:34:03.773 LBA Status Info Alert Notices: Not Supported 00:34:03.773 EGE Aggregate Log Change Notices: Not Supported 00:34:03.773 Normal NVM Subsystem Shutdown event: Not Supported 00:34:03.773 Zone Descriptor Change Notices: Not Supported 00:34:03.773 Discovery Log Change Notices: Not Supported 00:34:03.773 Controller Attributes 00:34:03.773 128-bit Host Identifier: Supported 00:34:03.773 
Non-Operational Permissive Mode: Not Supported 00:34:03.773 NVM Sets: Not Supported 00:34:03.773 Read Recovery Levels: Not Supported 00:34:03.773 Endurance Groups: Not Supported 00:34:03.773 Predictable Latency Mode: Not Supported 00:34:03.773 Traffic Based Keep ALive: Supported 00:34:03.773 Namespace Granularity: Not Supported 00:34:03.773 SQ Associations: Not Supported 00:34:03.773 UUID List: Not Supported 00:34:03.773 Multi-Domain Subsystem: Not Supported 00:34:03.773 Fixed Capacity Management: Not Supported 00:34:03.773 Variable Capacity Management: Not Supported 00:34:03.773 Delete Endurance Group: Not Supported 00:34:03.773 Delete NVM Set: Not Supported 00:34:03.773 Extended LBA Formats Supported: Not Supported 00:34:03.773 Flexible Data Placement Supported: Not Supported 00:34:03.773 00:34:03.773 Controller Memory Buffer Support 00:34:03.773 ================================ 00:34:03.773 Supported: No 00:34:03.773 00:34:03.773 Persistent Memory Region Support 00:34:03.773 ================================ 00:34:03.773 Supported: No 00:34:03.773 00:34:03.773 Admin Command Set Attributes 00:34:03.773 ============================ 00:34:03.773 Security Send/Receive: Not Supported 00:34:03.773 Format NVM: Not Supported 00:34:03.773 Firmware Activate/Download: Not Supported 00:34:03.773 Namespace Management: Not Supported 00:34:03.773 Device Self-Test: Not Supported 00:34:03.773 Directives: Not Supported 00:34:03.773 NVMe-MI: Not Supported 00:34:03.773 Virtualization Management: Not Supported 00:34:03.773 Doorbell Buffer Config: Not Supported 00:34:03.773 Get LBA Status Capability: Not Supported 00:34:03.773 Command & Feature Lockdown Capability: Not Supported 00:34:03.773 Abort Command Limit: 4 00:34:03.773 Async Event Request Limit: 4 00:34:03.773 Number of Firmware Slots: N/A 00:34:03.773 Firmware Slot 1 Read-Only: N/A 00:34:03.773 Firmware Activation Without Reset: N/A 00:34:03.773 Multiple Update Detection Support: N/A 00:34:03.773 Firmware Update Granularity: 
No Information Provided 00:34:03.773 Per-Namespace SMART Log: Yes 00:34:03.773 Asymmetric Namespace Access Log Page: Supported 00:34:03.773 ANA Transition Time : 10 sec 00:34:03.773 00:34:03.773 Asymmetric Namespace Access Capabilities 00:34:03.773 ANA Optimized State : Supported 00:34:03.773 ANA Non-Optimized State : Supported 00:34:03.773 ANA Inaccessible State : Supported 00:34:03.773 ANA Persistent Loss State : Supported 00:34:03.773 ANA Change State : Supported 00:34:03.773 ANAGRPID is not changed : No 00:34:03.773 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:03.773 00:34:03.773 ANA Group Identifier Maximum : 128 00:34:03.773 Number of ANA Group Identifiers : 128 00:34:03.773 Max Number of Allowed Namespaces : 1024 00:34:03.773 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:03.773 Command Effects Log Page: Supported 00:34:03.774 Get Log Page Extended Data: Supported 00:34:03.774 Telemetry Log Pages: Not Supported 00:34:03.774 Persistent Event Log Pages: Not Supported 00:34:03.774 Supported Log Pages Log Page: May Support 00:34:03.774 Commands Supported & Effects Log Page: Not Supported 00:34:03.774 Feature Identifiers & Effects Log Page:May Support 00:34:03.774 NVMe-MI Commands & Effects Log Page: May Support 00:34:03.774 Data Area 4 for Telemetry Log: Not Supported 00:34:03.774 Error Log Page Entries Supported: 128 00:34:03.774 Keep Alive: Supported 00:34:03.774 Keep Alive Granularity: 1000 ms 00:34:03.774 00:34:03.774 NVM Command Set Attributes 00:34:03.774 ========================== 00:34:03.774 Submission Queue Entry Size 00:34:03.774 Max: 64 00:34:03.774 Min: 64 00:34:03.774 Completion Queue Entry Size 00:34:03.774 Max: 16 00:34:03.774 Min: 16 00:34:03.774 Number of Namespaces: 1024 00:34:03.774 Compare Command: Not Supported 00:34:03.774 Write Uncorrectable Command: Not Supported 00:34:03.774 Dataset Management Command: Supported 00:34:03.774 Write Zeroes Command: Supported 00:34:03.774 Set Features Save Field: Not Supported 00:34:03.774 
Reservations: Not Supported 00:34:03.774 Timestamp: Not Supported 00:34:03.774 Copy: Not Supported 00:34:03.774 Volatile Write Cache: Present 00:34:03.774 Atomic Write Unit (Normal): 1 00:34:03.774 Atomic Write Unit (PFail): 1 00:34:03.774 Atomic Compare & Write Unit: 1 00:34:03.774 Fused Compare & Write: Not Supported 00:34:03.774 Scatter-Gather List 00:34:03.774 SGL Command Set: Supported 00:34:03.774 SGL Keyed: Not Supported 00:34:03.774 SGL Bit Bucket Descriptor: Not Supported 00:34:03.774 SGL Metadata Pointer: Not Supported 00:34:03.774 Oversized SGL: Not Supported 00:34:03.774 SGL Metadata Address: Not Supported 00:34:03.774 SGL Offset: Supported 00:34:03.774 Transport SGL Data Block: Not Supported 00:34:03.774 Replay Protected Memory Block: Not Supported 00:34:03.774 00:34:03.774 Firmware Slot Information 00:34:03.774 ========================= 00:34:03.774 Active slot: 0 00:34:03.774 00:34:03.774 Asymmetric Namespace Access 00:34:03.774 =========================== 00:34:03.774 Change Count : 0 00:34:03.774 Number of ANA Group Descriptors : 1 00:34:03.774 ANA Group Descriptor : 0 00:34:03.774 ANA Group ID : 1 00:34:03.774 Number of NSID Values : 1 00:34:03.774 Change Count : 0 00:34:03.774 ANA State : 1 00:34:03.774 Namespace Identifier : 1 00:34:03.774 00:34:03.774 Commands Supported and Effects 00:34:03.774 ============================== 00:34:03.774 Admin Commands 00:34:03.774 -------------- 00:34:03.774 Get Log Page (02h): Supported 00:34:03.774 Identify (06h): Supported 00:34:03.774 Abort (08h): Supported 00:34:03.774 Set Features (09h): Supported 00:34:03.774 Get Features (0Ah): Supported 00:34:03.774 Asynchronous Event Request (0Ch): Supported 00:34:03.774 Keep Alive (18h): Supported 00:34:03.774 I/O Commands 00:34:03.774 ------------ 00:34:03.774 Flush (00h): Supported 00:34:03.774 Write (01h): Supported LBA-Change 00:34:03.774 Read (02h): Supported 00:34:03.774 Write Zeroes (08h): Supported LBA-Change 00:34:03.774 Dataset Management (09h): Supported 
00:34:03.774
00:34:03.774 Error Log
00:34:03.774 =========
00:34:03.774 Entry: 0
00:34:03.774 Error Count: 0x3
00:34:03.774 Submission Queue Id: 0x0
00:34:03.774 Command Id: 0x5
00:34:03.774 Phase Bit: 0
00:34:03.774 Status Code: 0x2
00:34:03.774 Status Code Type: 0x0
00:34:03.774 Do Not Retry: 1
00:34:03.774 Error Location: 0x28
00:34:03.774 LBA: 0x0
00:34:03.774 Namespace: 0x0
00:34:03.774 Vendor Log Page: 0x0
00:34:03.774 -----------
00:34:03.774 Entry: 1
00:34:03.774 Error Count: 0x2
00:34:03.774 Submission Queue Id: 0x0
00:34:03.774 Command Id: 0x5
00:34:03.774 Phase Bit: 0
00:34:03.774 Status Code: 0x2
00:34:03.774 Status Code Type: 0x0
00:34:03.774 Do Not Retry: 1
00:34:03.774 Error Location: 0x28
00:34:03.774 LBA: 0x0
00:34:03.774 Namespace: 0x0
00:34:03.774 Vendor Log Page: 0x0
00:34:03.774 -----------
00:34:03.774 Entry: 2
00:34:03.774 Error Count: 0x1
00:34:03.774 Submission Queue Id: 0x0
00:34:03.774 Command Id: 0x4
00:34:03.774 Phase Bit: 0
00:34:03.774 Status Code: 0x2
00:34:03.774 Status Code Type: 0x0
00:34:03.774 Do Not Retry: 1
00:34:03.774 Error Location: 0x28
00:34:03.774 LBA: 0x0
00:34:03.774 Namespace: 0x0
00:34:03.774 Vendor Log Page: 0x0
00:34:03.774
00:34:03.774 Number of Queues
00:34:03.774 ================
00:34:03.774 Number of I/O Submission Queues: 128
00:34:03.774 Number of I/O Completion Queues: 128
00:34:03.774
00:34:03.774 ZNS Specific Controller Data
00:34:03.774 ============================
00:34:03.774 Zone Append Size Limit: 0
00:34:03.774
00:34:03.774
00:34:03.774 Active Namespaces
00:34:03.774 =================
00:34:03.774 get_feature(0x05) failed
00:34:03.774 Namespace ID:1
00:34:03.774 Command Set Identifier: NVM (00h)
00:34:03.774 Deallocate: Supported
00:34:03.774 Deallocated/Unwritten Error: Not Supported
00:34:03.774 Deallocated Read Value: Unknown
00:34:03.774 Deallocate in Write Zeroes: Not Supported
00:34:03.774 Deallocated Guard Field: 0xFFFF
00:34:03.774 Flush: Supported
00:34:03.774 Reservation: Not Supported
00:34:03.774 Namespace Sharing Capabilities: Multiple Controllers
00:34:03.774 Size (in LBAs): 1953525168 (931GiB)
00:34:03.774 Capacity (in LBAs): 1953525168 (931GiB)
00:34:03.774 Utilization (in LBAs): 1953525168 (931GiB)
00:34:03.774 UUID: 4977f3b5-ef83-4cae-a0ef-4f3743febaf9
00:34:03.774 Thin Provisioning: Not Supported
00:34:03.774 Per-NS Atomic Units: Yes
00:34:03.774 Atomic Boundary Size (Normal): 0
00:34:03.774 Atomic Boundary Size (PFail): 0
00:34:03.774 Atomic Boundary Offset: 0
00:34:03.774 NGUID/EUI64 Never Reused: No
00:34:03.774 ANA group ID: 1
00:34:03.774 Namespace Write Protected: No
00:34:03.774 Number of LBA Formats: 1
00:34:03.774 Current LBA Format: LBA Format #00
00:34:03.774 LBA Format #00: Data Size: 512 Metadata Size: 0
00:34:03.774
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:03.774 rmmod nvme_tcp
00:34:03.774 rmmod nvme_fabrics
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:03.774 11:28:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:34:06.314 11:28:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:34:07.250 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:34:07.250 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:34:07.250 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:34:08.187 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:34:08.446
00:34:08.446 real 0m10.064s
00:34:08.446 user 0m2.141s
00:34:08.446 sys 0m3.857s
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:34:08.446 ************************************
00:34:08.446 END TEST nvmf_identify_kernel_target
00:34:08.446 ************************************
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:08.446 ************************************
00:34:08.446 START TEST nvmf_auth_host
00:34:08.446 ************************************
00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:34:08.446 * Looking for test storage...
00:34:08.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.446 11:28:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.446 --rc genhtml_branch_coverage=1 00:34:08.446 --rc genhtml_function_coverage=1 00:34:08.446 --rc genhtml_legend=1 00:34:08.446 --rc geninfo_all_blocks=1 00:34:08.446 --rc geninfo_unexecuted_blocks=1 00:34:08.446 00:34:08.446 ' 00:34:08.446 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.446 --rc genhtml_branch_coverage=1 00:34:08.446 --rc genhtml_function_coverage=1 00:34:08.446 --rc genhtml_legend=1 00:34:08.446 --rc geninfo_all_blocks=1 00:34:08.446 --rc geninfo_unexecuted_blocks=1 00:34:08.446 00:34:08.446 ' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.446 --rc genhtml_branch_coverage=1 00:34:08.446 --rc genhtml_function_coverage=1 00:34:08.446 --rc genhtml_legend=1 00:34:08.446 --rc geninfo_all_blocks=1 00:34:08.446 --rc geninfo_unexecuted_blocks=1 00:34:08.446 00:34:08.446 ' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.446 --rc genhtml_branch_coverage=1 00:34:08.446 --rc genhtml_function_coverage=1 00:34:08.446 --rc genhtml_legend=1 00:34:08.446 --rc geninfo_all_blocks=1 00:34:08.446 --rc geninfo_unexecuted_blocks=1 00:34:08.446 00:34:08.446 ' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.446 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:08.446 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.447 11:28:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.447 11:28:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:10.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:10.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:10.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.977 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:10.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:10.978 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.978 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:34:10.978 00:34:10.978 --- 10.0.0.2 ping statistics --- 00:34:10.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.978 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:34:10.978 00:34:10.978 --- 10.0.0.1 ping statistics --- 00:34:10.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.978 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=382888 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:10.978 11:28:35 
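The `nvmf_tcp_init` trace above moves one port of the NIC pair (`cvl_0_0`) into a private network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) exchange real TCP traffic on a single host. A condensed sketch of that sequence, with `run` as a stub so it can be shown without root privileges (drop the stub to execute for real):

```shell
# Sketch of the nvmf_tcp_init steps traced above. "run" only echoes the
# command; remove it to actually configure the host (requires root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # private namespace for the target
run ip link set cvl_0_0 netns "$NS"                     # target port moves into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
run ping -c 1 10.0.0.2                                  # smoke-test both directions
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With the namespace in place, the target application is launched under `ip netns exec cvl_0_0_ns_spdk`, which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.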
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 382888 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 382888 ']' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=37232bf854ddd7936583f270e6090336 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lL6 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 37232bf854ddd7936583f270e6090336 0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 37232bf854ddd7936583f270e6090336 0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=37232bf854ddd7936583f270e6090336 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lL6 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lL6 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lL6 00:34:10.978 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49015aad53d268504414af88edc9dd6217ce49f60058c5d18826b68bcb51ea5f 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WNI 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49015aad53d268504414af88edc9dd6217ce49f60058c5d18826b68bcb51ea5f 3 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49015aad53d268504414af88edc9dd6217ce49f60058c5d18826b68bcb51ea5f 3 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49015aad53d268504414af88edc9dd6217ce49f60058c5d18826b68bcb51ea5f 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:10.978 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:10.979 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WNI 00:34:10.979 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WNI 00:34:10.979 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.WNI 00:34:10.979 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:10.979 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=625674e1f8b92ee2c4e243cce4b49c5ad189efb7a900001e 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SFA 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 625674e1f8b92ee2c4e243cce4b49c5ad189efb7a900001e 0 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 625674e1f8b92ee2c4e243cce4b49c5ad189efb7a900001e 0 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.238 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=625674e1f8b92ee2c4e243cce4b49c5ad189efb7a900001e 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SFA 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SFA 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.SFA 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5c6c68b138f84413ad5fa021352a5b318aba81da661a6bd5 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZPu 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5c6c68b138f84413ad5fa021352a5b318aba81da661a6bd5 2 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 5c6c68b138f84413ad5fa021352a5b318aba81da661a6bd5 2 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5c6c68b138f84413ad5fa021352a5b318aba81da661a6bd5 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZPu 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZPu 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZPu 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2cf575b70ca219c10c074352f6ca97b 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.USj 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2cf575b70ca219c10c074352f6ca97b 1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2cf575b70ca219c10c074352f6ca97b 1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2cf575b70ca219c10c074352f6ca97b 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.USj 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.USj 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.USj 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=4a61e227db5f523f3c7047a5a34158b4 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.enf 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4a61e227db5f523f3c7047a5a34158b4 1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4a61e227db5f523f3c7047a5a34158b4 1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4a61e227db5f523f3c7047a5a34158b4 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.enf 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.enf 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.enf 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.238 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:11.239 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0f81f16fdf82186fee998376a0ece04188b436939803c4d 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xXy 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0f81f16fdf82186fee998376a0ece04188b436939803c4d 2 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0f81f16fdf82186fee998376a0ece04188b436939803c4d 2 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0f81f16fdf82186fee998376a0ece04188b436939803c4d 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xXy 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xXy 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xXy 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=24b4423521b88b12e8c23a1f9ec72afd 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.gKH 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 24b4423521b88b12e8c23a1f9ec72afd 0 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 24b4423521b88b12e8c23a1f9ec72afd 0 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=24b4423521b88b12e8c23a1f9ec72afd 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:11.239 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.gKH 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.gKH 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.gKH 00:34:11.497 11:28:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e5ac79d9fbebb11846c3d7484f75f953ed9cc6cbf36af907971e451d3d2cc738 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.O1i 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e5ac79d9fbebb11846c3d7484f75f953ed9cc6cbf36af907971e451d3d2cc738 3 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e5ac79d9fbebb11846c3d7484f75f953ed9cc6cbf36af907971e451d3d2cc738 3 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:11.497 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e5ac79d9fbebb11846c3d7484f75f953ed9cc6cbf36af907971e451d3d2cc738 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.O1i 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.O1i 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.O1i 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 382888 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 382888 ']' 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.498 11:28:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lL6 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.WNI ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WNI 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.SFA 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZPu ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZPu 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.USj 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.756 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.enf ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.enf 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.xXy 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.gKH ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.gKH 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.O1i 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.757 11:28:36 
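The `host/auth.sh@80-82` loop above registers each generated key file with the target over RPC, adding the companion controller key (`ckeyN`) only for slots where one was generated. A sketch of that control flow, with `rpc_cmd` stubbed (it normally drives SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock`) so the loop can be shown without a running target:

```shell
# Sketch of the key-loading loop traced above; rpc_cmd is stubbed to echo.
rpc_cmd() { echo "rpc: $*"; }

# Slot 0 has a controller key, slot 1 does not (paths taken from the log).
keys=(/tmp/spdk.key-null.lL6 /tmp/spdk.key-null.SFA)
ckeys=(/tmp/spdk.key-sha512.WNI "")

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
  if [[ -n "${ckeys[$i]}" ]]; then                     # [[ -n '' ]] skips empty slots
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done
```

This mirrors the trace exactly: five `keyN` registrations, with `ckey4` skipped because `ckeys[4]` is empty.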
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:11.757 11:28:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.690 Waiting for block devices as requested 00:34:12.690 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:12.948 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:12.948 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:12.948 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.205 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.205 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.205 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:13.205 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:13.463 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:13.463 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:13.463 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:13.721 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:13.721 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:13.721 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:13.721 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:13.979 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:13.979 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:14.545 No valid GPT data, bailing 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:14.545 11:28:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:14.545 00:34:14.545 Discovery Log Number of Records 2, Generation counter 2 00:34:14.545 =====Discovery Log Entry 0====== 00:34:14.545 trtype: tcp 00:34:14.545 adrfam: ipv4 00:34:14.545 subtype: current discovery subsystem 00:34:14.545 treq: not specified, sq flow control disable supported 00:34:14.545 portid: 1 00:34:14.545 trsvcid: 4420 00:34:14.545 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:14.545 traddr: 10.0.0.1 00:34:14.545 eflags: none 00:34:14.545 sectype: none 00:34:14.545 =====Discovery Log Entry 1====== 00:34:14.545 trtype: tcp 00:34:14.545 adrfam: ipv4 00:34:14.545 subtype: nvme subsystem 00:34:14.545 treq: not specified, sq flow control disable supported 00:34:14.545 portid: 1 00:34:14.545 trsvcid: 4420 00:34:14.545 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:14.545 traddr: 10.0.0.1 00:34:14.545 eflags: none 00:34:14.545 sectype: none 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:14.545 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:14.546 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:14.546 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:14.546 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:14.804 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:14.804 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.804 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.805 nvme0n1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.805 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.064 nvme0n1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.064 11:28:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.064 
11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.064 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.323 nvme0n1 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.323 11:28:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:15.582 nvme0n1 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.582 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.840 nvme0n1 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.840 11:28:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.840 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.841 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.099 nvme0n1 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.099 
11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.099 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:16.357 
11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.357 11:28:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.357 11:28:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.615 nvme0n1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.615 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.615 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.615 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.873 nvme0n1 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.873 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.873 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.874 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.132 nvme0n1 00:34:17.132 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:17.132 11:28:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.132 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.391 nvme0n1 00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.391 11:28:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:17.649 nvme0n1
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:17.649 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]]
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.215 11:28:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.473 nvme0n1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.473 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.732 nvme0n1
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.732 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA:
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA:
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]]
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:18.990 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:18.991 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.249 nvme0n1
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]]
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.249 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.250 11:28:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.507 nvme0n1
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.507 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.508 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.766 nvme0n1
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:19.766 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:20.024 11:28:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]]
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:21.924 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.182 nvme0n1
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]]
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:34:22.182 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- #
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.183 11:28:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.183 11:28:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.748 nvme0n1 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.748 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.749 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.749 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.315 nvme0n1 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.315 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.315 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.316 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.316 11:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.316 11:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.882 nvme0n1 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.882 11:28:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.882 11:28:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.882 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.448 nvme0n1 00:34:24.448 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.448 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.448 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.448 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.448 11:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:24.448 11:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.448 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.449 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.388 nvme0n1 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.388 11:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:25.388 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.389 11:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:25.389 11:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.389 11:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.322 nvme0n1 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:26.322 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.323 11:28:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.323 11:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.260 nvme0n1 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.260 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.261 11:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.193 nvme0n1 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.193 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.194 
11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.194 11:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.126 nvme0n1 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.126 nvme0n1 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.126 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.384 
11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 nvme0n1 
00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.384 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.384 11:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.384 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.384 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:29.642 11:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA:
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]]
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.642 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.643 nvme0n1
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]]
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.643 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.901 nvme0n1
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:34:29.901 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:29.902 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.160 nvme0n1
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:30.160 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3:
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]]
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=:
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.161 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.419 nvme0n1
00:34:30.419 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.419 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.420 11:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==:
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==:
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.420 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.678 nvme0n1
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.678 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA:
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA:
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9:
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.679 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.938 nvme0n1
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:30.938 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==:
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]]
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C:
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:30.939 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.198 nvme0n1
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=:
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 --
# dhgroup=ffdhe3072 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.457 nvme0n1 00:34:31.457 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.457 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.458 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.458 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.458 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.458 11:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.458 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.458 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.458 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.458 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.716 nvme0n1 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.716 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.975 
11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.975 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.234 nvme0n1 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.234 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.234 11:28:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.234 11:28:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.493 nvme0n1 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.493 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.494 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.494 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.752 nvme0n1 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.752 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.010 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.010 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.010 
11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.010 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.268 nvme0n1 00:34:33.268 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.268 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.269 11:28:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.269 11:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.836 nvme0n1 
00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:33.836 11:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.836 
11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.836 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.403 nvme0n1 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.403 11:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.403 11:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.403 11:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.970 nvme0n1 00:34:34.970 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.970 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:34.971 11:28:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.971 11:28:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.971 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.537 nvme0n1 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.537 11:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.537 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.538 11:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:35.538 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.104 nvme0n1 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.104 11:29:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.104 11:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.038 nvme0n1 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.038 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.039 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.039 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.039 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.039 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.039 11:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.970 nvme0n1 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:37.970 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.971 11:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.907 nvme0n1 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.907 11:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.843 nvme0n1 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.843 11:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.411 nvme0n1 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.670 11:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.670 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.671 nvme0n1 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.671 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.930 nvme0n1 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.930 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.931 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.931 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:40.931 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.931 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.189 nvme0n1 00:34:41.189 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.190 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.449 nvme0n1 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.449 11:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.449 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:41.708 nvme0n1 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.708 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:41.708 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.709 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.709 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.967 nvme0n1 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.967 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:41.968 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.968 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.226 nvme0n1 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.227 
11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.227 11:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.227 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.485 nvme0n1 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.485 11:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.485 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.485 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.486 11:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.486 11:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.486 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.744 nvme0n1 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.744 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:42.744 11:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.745 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.011 nvme0n1 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.011 
11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.011 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.012 
11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.012 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.269 nvme0n1 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.269 11:29:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.269 
11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.269 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.270 11:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.577 nvme0n1 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 
00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.577 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.577 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.863 nvme0n1 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.863 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.152 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:34:44.152 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.153 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.430 nvme0n1 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.430 11:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.430 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.431 11:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.700 nvme0n1 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.700 
11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.700 11:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.700 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.267 nvme0n1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.267 11:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:45.267 11:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.267 11:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.267 11:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.862 nvme0n1 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.862 11:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:45.862 11:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.862 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.428 nvme0n1 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.428 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:46.429 11:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.429 11:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.994 nvme0n1 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.994 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.995 
11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.995 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 nvme0n1 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.561 11:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.561 11:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcyMzJiZjg1NGRkZDc5MzY1ODNmMjcwZTYwOTAzMzaH2vj3: 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: ]] 00:34:47.561 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDkwMTVhYWQ1M2QyNjg1MDQ0MTRhZjg4ZWRjOWRkNjIxN2NlNDlmNjAwNThjNWQxODgyNmI2OGJjYjUxZWE1ZiaxEQs=: 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.562 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 nvme0n1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:48.497 11:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.497 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.498 11:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.432 nvme0n1 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:49.432 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:49.433 
11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.433 11:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.366 nvme0n1 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.366 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.367 11:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjBmODFmMTZmZGY4MjE4NmZlZTk5ODM3NmEwZWNlMDQxODhiNDM2OTM5ODAzYzRkKdQaxQ==: 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjRiNDQyMzUyMWI4OGIxMmU4YzIzYTFmOWVjNzJhZmRzkA9C: 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.367 11:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:51.301 nvme0n1 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTVhYzc5ZDlmYmViYjExODQ2YzNkNzQ4NGY3NWY5NTNlZDljYzZjYmYzNmFmOTA3OTcxZTQ1MWQzZDJjYzczOK+Qyv4=: 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.301 
11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.301 11:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 nvme0n1 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:52.235 
11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.235 request: 00:34:52.235 { 00:34:52.235 "name": "nvme0", 00:34:52.235 "trtype": "tcp", 00:34:52.235 "traddr": "10.0.0.1", 00:34:52.235 "adrfam": "ipv4", 00:34:52.235 "trsvcid": "4420", 00:34:52.235 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:52.235 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:52.235 "prchk_reftag": false, 00:34:52.235 "prchk_guard": false, 00:34:52.235 "hdgst": false, 00:34:52.235 "ddgst": false, 00:34:52.235 "allow_unrecognized_csi": false, 00:34:52.235 "method": "bdev_nvme_attach_controller", 00:34:52.235 "req_id": 1 00:34:52.235 } 00:34:52.235 Got JSON-RPC error response 00:34:52.235 response: 00:34:52.235 { 00:34:52.235 "code": -5, 00:34:52.235 "message": "Input/output 
error" 00:34:52.235 } 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:52.235 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.236 request: 00:34:52.236 { 00:34:52.236 "name": "nvme0", 00:34:52.236 "trtype": "tcp", 00:34:52.236 "traddr": "10.0.0.1", 
00:34:52.236 "adrfam": "ipv4", 00:34:52.236 "trsvcid": "4420", 00:34:52.236 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:52.236 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:52.236 "prchk_reftag": false, 00:34:52.236 "prchk_guard": false, 00:34:52.236 "hdgst": false, 00:34:52.236 "ddgst": false, 00:34:52.236 "dhchap_key": "key2", 00:34:52.236 "allow_unrecognized_csi": false, 00:34:52.236 "method": "bdev_nvme_attach_controller", 00:34:52.236 "req_id": 1 00:34:52.236 } 00:34:52.236 Got JSON-RPC error response 00:34:52.236 response: 00:34:52.236 { 00:34:52.236 "code": -5, 00:34:52.236 "message": "Input/output error" 00:34:52.236 } 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.236 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.494 11:29:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:52.494 11:29:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.494 request: 00:34:52.494 { 00:34:52.494 "name": "nvme0", 00:34:52.494 "trtype": "tcp", 00:34:52.494 "traddr": "10.0.0.1", 00:34:52.494 "adrfam": "ipv4", 00:34:52.494 "trsvcid": "4420", 00:34:52.494 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:52.494 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:52.494 "prchk_reftag": false, 00:34:52.494 "prchk_guard": false, 00:34:52.494 "hdgst": false, 00:34:52.494 "ddgst": false, 00:34:52.494 "dhchap_key": "key1", 00:34:52.494 "dhchap_ctrlr_key": "ckey2", 00:34:52.494 "allow_unrecognized_csi": false, 00:34:52.494 "method": "bdev_nvme_attach_controller", 00:34:52.494 "req_id": 1 00:34:52.494 } 00:34:52.494 Got JSON-RPC error response 00:34:52.494 response: 00:34:52.494 { 00:34:52.494 "code": -5, 00:34:52.494 "message": "Input/output error" 00:34:52.494 } 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.494 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.495 11:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.495 nvme0n1 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.495 11:29:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:52.495 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.753 11:29:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.753 request: 00:34:52.753 { 00:34:52.753 "name": "nvme0", 00:34:52.753 "dhchap_key": "key1", 00:34:52.753 "dhchap_ctrlr_key": "ckey2", 00:34:52.753 "method": "bdev_nvme_set_keys", 00:34:52.753 "req_id": 1 00:34:52.753 } 00:34:52.753 Got JSON-RPC error response 00:34:52.753 response: 00:34:52.753 { 00:34:52.753 "code": -13, 00:34:52.753 "message": "Permission denied" 00:34:52.753 } 00:34:52.753 
11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:52.753 11:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:54.127 11:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjI1Njc0ZTFmOGI5MmVlMmM0ZTI0M2NjZTRiNDljNWFkMTg5ZWZiN2E5MDAwMDFlLYW7lg==: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: ]] 00:34:55.070 11:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWM2YzY4YjEzOGY4NDQxM2FkNWZhMDIxMzUyYTViMzE4YWJhODFkYTY2MWE2YmQ1WA3EkA==: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.070 nvme0n1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.070 11:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzJjZjU3NWI3MGNhMjE5YzEwYzA3NDM1MmY2Y2E5N2LF8iEA: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGE2MWUyMjdkYjVmNTIzZjNjNzA0N2E1YTM0MTU4YjSfLGx9: 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:55.070 
11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.070 request: 00:34:55.070 { 00:34:55.070 "name": "nvme0", 00:34:55.070 "dhchap_key": "key2", 00:34:55.070 "dhchap_ctrlr_key": "ckey1", 00:34:55.070 "method": "bdev_nvme_set_keys", 00:34:55.070 "req_id": 1 00:34:55.070 } 00:34:55.070 Got JSON-RPC error response 00:34:55.070 response: 00:34:55.070 { 00:34:55.070 "code": -13, 00:34:55.070 "message": "Permission denied" 00:34:55.070 } 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:55.070 11:29:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:55.070 11:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:56.448 rmmod nvme_tcp 00:34:56.448 rmmod nvme_fabrics 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 382888 ']' 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 382888 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 382888 ']' 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 382888 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382888 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382888' 00:34:56.448 killing process with pid 382888 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 382888 00:34:56.448 11:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 382888 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.448 11:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:58.985 11:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:59.923 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:59.923 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:59.923 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:00.863 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:00.863 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lL6 /tmp/spdk.key-null.SFA /tmp/spdk.key-sha256.USj /tmp/spdk.key-sha384.xXy /tmp/spdk.key-sha512.O1i 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:00.863 11:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:02.243 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:02.243 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:02.243 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:02.243 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:02.243 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:02.243 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:02.243 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:02.243 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:02.243 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:02.243 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:02.243 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:02.243 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:02.243 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:02.243 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:02.243 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:02.243 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:02.243 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:02.243 00:35:02.243 real 0m53.866s 00:35:02.243 user 0m51.723s 00:35:02.243 sys 0m6.009s 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.243 ************************************ 00:35:02.243 END TEST nvmf_auth_host 00:35:02.243 ************************************ 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:35:02.243 11:29:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.243 ************************************ 00:35:02.243 START TEST nvmf_digest 00:35:02.243 ************************************ 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:02.243 * Looking for test storage... 00:35:02.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:35:02.243 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.503 --rc genhtml_branch_coverage=1 00:35:02.503 --rc genhtml_function_coverage=1 00:35:02.503 --rc genhtml_legend=1 00:35:02.503 --rc geninfo_all_blocks=1 00:35:02.503 --rc geninfo_unexecuted_blocks=1 00:35:02.503 00:35:02.503 ' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.503 --rc genhtml_branch_coverage=1 00:35:02.503 --rc genhtml_function_coverage=1 00:35:02.503 --rc genhtml_legend=1 00:35:02.503 --rc geninfo_all_blocks=1 00:35:02.503 --rc geninfo_unexecuted_blocks=1 00:35:02.503 00:35:02.503 ' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.503 --rc genhtml_branch_coverage=1 00:35:02.503 --rc genhtml_function_coverage=1 00:35:02.503 --rc genhtml_legend=1 00:35:02.503 --rc geninfo_all_blocks=1 00:35:02.503 --rc geninfo_unexecuted_blocks=1 00:35:02.503 00:35:02.503 ' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:02.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:02.503 --rc genhtml_branch_coverage=1 00:35:02.503 --rc genhtml_function_coverage=1 00:35:02.503 --rc genhtml_legend=1 00:35:02.503 --rc geninfo_all_blocks=1 00:35:02.503 --rc geninfo_unexecuted_blocks=1 00:35:02.503 00:35:02.503 ' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:02.503 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:02.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:02.504 11:29:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:02.504 11:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.408 11:29:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:04.408 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:04.408 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:04.408 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:04.408 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.408 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:04.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:35:04.667 00:35:04.667 --- 10.0.0.2 ping statistics --- 00:35:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.667 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:04.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:35:04.667 00:35:04.667 --- 10.0.0.1 ping statistics --- 00:35:04.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.667 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.667 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:04.668 ************************************ 00:35:04.668 START TEST nvmf_digest_clean 00:35:04.668 ************************************ 00:35:04.668 
11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=392772 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 392772 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 392772 ']' 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.668 11:29:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.668 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.668 [2024-11-17 11:29:29.281764] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:04.668 [2024-11-17 11:29:29.281854] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.926 [2024-11-17 11:29:29.352706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.926 [2024-11-17 11:29:29.395919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.926 [2024-11-17 11:29:29.395987] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.926 [2024-11-17 11:29:29.396017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.926 [2024-11-17 11:29:29.396030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.926 [2024-11-17 11:29:29.396039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:04.926 [2024-11-17 11:29:29.396675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.926 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.186 null0 00:35:05.186 [2024-11-17 11:29:29.643669] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.186 [2024-11-17 11:29:29.667935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=392803 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 392803 /var/tmp/bperf.sock 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 392803 ']' 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.186 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.186 [2024-11-17 11:29:29.714096] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:05.186 [2024-11-17 11:29:29.714171] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid392803 ] 00:35:05.186 [2024-11-17 11:29:29.778325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.186 [2024-11-17 11:29:29.823072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.445 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.445 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:05.445 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:05.445 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:05.445 11:29:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:06.012 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.012 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.270 nvme0n1 00:35:06.270 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:06.270 11:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.270 Running I/O for 2 seconds... 00:35:08.579 18965.00 IOPS, 74.08 MiB/s [2024-11-17T10:29:33.237Z] 18893.50 IOPS, 73.80 MiB/s 00:35:08.579 Latency(us) 00:35:08.579 [2024-11-17T10:29:33.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.579 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:08.579 nvme0n1 : 2.01 18924.62 73.92 0.00 0.00 6755.95 3179.71 14078.10 00:35:08.579 [2024-11-17T10:29:33.237Z] =================================================================================================================== 00:35:08.579 [2024-11-17T10:29:33.237Z] Total : 18924.62 73.92 0.00 0.00 6755.95 3179.71 14078.10 00:35:08.579 { 00:35:08.579 "results": [ 00:35:08.579 { 00:35:08.579 "job": "nvme0n1", 00:35:08.579 "core_mask": "0x2", 00:35:08.579 "workload": "randread", 00:35:08.579 "status": "finished", 00:35:08.579 "queue_depth": 128, 00:35:08.579 "io_size": 4096, 00:35:08.579 "runtime": 2.005694, 00:35:08.579 "iops": 18924.621602298255, 00:35:08.579 "mibps": 73.92430313397756, 00:35:08.579 "io_failed": 0, 00:35:08.579 "io_timeout": 0, 00:35:08.579 "avg_latency_us": 6755.948768382155, 00:35:08.579 "min_latency_us": 3179.7096296296295, 00:35:08.579 "max_latency_us": 14078.103703703704 00:35:08.579 } 00:35:08.579 ], 00:35:08.579 "core_count": 1 00:35:08.579 } 00:35:08.579 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:08.579 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:08.579 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:08.579 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:08.579 11:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:08.579 | select(.opcode=="crc32c") 00:35:08.580 | "\(.module_name) \(.executed)"' 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 392803 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 392803 ']' 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 392803 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392803 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392803' 00:35:08.580 killing process with pid 392803 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 392803 00:35:08.580 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.580 00:35:08.580 Latency(us) 00:35:08.580 [2024-11-17T10:29:33.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.580 [2024-11-17T10:29:33.238Z] =================================================================================================================== 00:35:08.580 [2024-11-17T10:29:33.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.580 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 392803 00:35:08.838 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:08.838 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:08.838 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:08.838 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=393271 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 393271 /var/tmp/bperf.sock 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 393271 ']' 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.839 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.839 [2024-11-17 11:29:33.427372] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:08.839 [2024-11-17 11:29:33.427462] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393271 ] 00:35:08.839 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:08.839 Zero copy mechanism will not be used. 
00:35:09.097 [2024-11-17 11:29:33.496068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.097 [2024-11-17 11:29:33.541963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.097 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.097 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:09.097 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:09.097 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:09.097 11:29:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:09.356 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.356 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.922 nvme0n1 00:35:09.922 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:09.922 11:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.922 Zero copy mechanism will not be used. 00:35:09.922 Running I/O for 2 seconds... 
00:35:12.231 5191.00 IOPS, 648.88 MiB/s [2024-11-17T10:29:36.889Z] 4959.00 IOPS, 619.88 MiB/s 00:35:12.231 Latency(us) 00:35:12.231 [2024-11-17T10:29:36.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.231 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:12.231 nvme0n1 : 2.01 4961.32 620.16 0.00 0.00 3220.10 867.75 8009.96 00:35:12.231 [2024-11-17T10:29:36.889Z] =================================================================================================================== 00:35:12.231 [2024-11-17T10:29:36.889Z] Total : 4961.32 620.16 0.00 0.00 3220.10 867.75 8009.96 00:35:12.231 { 00:35:12.231 "results": [ 00:35:12.231 { 00:35:12.231 "job": "nvme0n1", 00:35:12.231 "core_mask": "0x2", 00:35:12.231 "workload": "randread", 00:35:12.231 "status": "finished", 00:35:12.231 "queue_depth": 16, 00:35:12.231 "io_size": 131072, 00:35:12.231 "runtime": 2.005112, 00:35:12.231 "iops": 4961.31886897091, 00:35:12.231 "mibps": 620.1648586213638, 00:35:12.231 "io_failed": 0, 00:35:12.231 "io_timeout": 0, 00:35:12.231 "avg_latency_us": 3220.100816989084, 00:35:12.231 "min_latency_us": 867.7451851851852, 00:35:12.231 "max_latency_us": 8009.955555555555 00:35:12.231 } 00:35:12.231 ], 00:35:12.231 "core_count": 1 00:35:12.231 } 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:12.231 | select(.opcode=="crc32c") 00:35:12.231 | "\(.module_name) \(.executed)"' 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 393271 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 393271 ']' 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 393271 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393271 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393271' 00:35:12.231 killing process with pid 393271 00:35:12.231 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 393271 00:35:12.231 Received shutdown signal, test time was about 2.000000 seconds 00:35:12.231 
00:35:12.231 Latency(us) 00:35:12.231 [2024-11-17T10:29:36.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:12.231 [2024-11-17T10:29:36.890Z] =================================================================================================================== 00:35:12.232 [2024-11-17T10:29:36.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:12.232 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 393271 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=393730 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 393730 /var/tmp/bperf.sock 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 393730 ']' 00:35:12.490 11:29:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:12.490 11:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:12.490 [2024-11-17 11:29:37.033555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:12.490 [2024-11-17 11:29:37.033636] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393730 ] 00:35:12.490 [2024-11-17 11:29:37.099252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.749 [2024-11-17 11:29:37.146897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.749 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:12.749 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:12.749 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:12.749 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:12.749 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:13.008 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.008 11:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.576 nvme0n1 00:35:13.576 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:13.576 11:29:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:13.576 Running I/O for 2 seconds... 
00:35:15.891 19105.00 IOPS, 74.63 MiB/s [2024-11-17T10:29:40.549Z] 18712.50 IOPS, 73.10 MiB/s 00:35:15.891 Latency(us) 00:35:15.891 [2024-11-17T10:29:40.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.891 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:15.891 nvme0n1 : 2.01 18711.34 73.09 0.00 0.00 6824.68 2706.39 11747.93 00:35:15.891 [2024-11-17T10:29:40.549Z] =================================================================================================================== 00:35:15.891 [2024-11-17T10:29:40.549Z] Total : 18711.34 73.09 0.00 0.00 6824.68 2706.39 11747.93 00:35:15.891 { 00:35:15.891 "results": [ 00:35:15.891 { 00:35:15.891 "job": "nvme0n1", 00:35:15.891 "core_mask": "0x2", 00:35:15.891 "workload": "randwrite", 00:35:15.891 "status": "finished", 00:35:15.891 "queue_depth": 128, 00:35:15.891 "io_size": 4096, 00:35:15.891 "runtime": 2.008675, 00:35:15.891 "iops": 18711.33956463838, 00:35:15.891 "mibps": 73.09117017436867, 00:35:15.891 "io_failed": 0, 00:35:15.891 "io_timeout": 0, 00:35:15.891 "avg_latency_us": 6824.679685177795, 00:35:15.891 "min_latency_us": 2706.394074074074, 00:35:15.891 "max_latency_us": 11747.934814814815 00:35:15.891 } 00:35:15.891 ], 00:35:15.891 "core_count": 1 00:35:15.891 } 00:35:15.891 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:15.891 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:15.891 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:15.892 | select(.opcode=="crc32c") 00:35:15.892 | "\(.module_name) \(.executed)"' 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 393730 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 393730 ']' 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 393730 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393730 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393730' 00:35:15.892 killing process with pid 393730 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 393730 00:35:15.892 Received shutdown signal, test time was about 2.000000 seconds 00:35:15.892 
00:35:15.892 Latency(us) 00:35:15.892 [2024-11-17T10:29:40.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.892 [2024-11-17T10:29:40.550Z] =================================================================================================================== 00:35:15.892 [2024-11-17T10:29:40.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.892 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 393730 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394139 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394139 /var/tmp/bperf.sock 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394139 ']' 00:35:16.152 11:29:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:16.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:16.152 11:29:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:16.152 [2024-11-17 11:29:40.776361] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:16.152 [2024-11-17 11:29:40.776436] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394139 ] 00:35:16.152 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:16.152 Zero copy mechanism will not be used. 
00:35:16.411 [2024-11-17 11:29:40.844607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.411 [2024-11-17 11:29:40.888261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.411 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.411 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:16.411 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:16.411 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:16.411 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:16.980 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.980 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:17.238 nvme0n1 00:35:17.238 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:17.238 11:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.497 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:17.497 Zero copy mechanism will not be used. 00:35:17.497 Running I/O for 2 seconds... 
00:35:19.368 5150.00 IOPS, 643.75 MiB/s [2024-11-17T10:29:44.026Z] 5016.50 IOPS, 627.06 MiB/s 00:35:19.368 Latency(us) 00:35:19.368 [2024-11-17T10:29:44.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.368 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:19.368 nvme0n1 : 2.00 5012.87 626.61 0.00 0.00 3184.03 2390.85 12718.84 00:35:19.368 [2024-11-17T10:29:44.026Z] =================================================================================================================== 00:35:19.368 [2024-11-17T10:29:44.026Z] Total : 5012.87 626.61 0.00 0.00 3184.03 2390.85 12718.84 00:35:19.368 { 00:35:19.368 "results": [ 00:35:19.368 { 00:35:19.368 "job": "nvme0n1", 00:35:19.368 "core_mask": "0x2", 00:35:19.368 "workload": "randwrite", 00:35:19.368 "status": "finished", 00:35:19.368 "queue_depth": 16, 00:35:19.368 "io_size": 131072, 00:35:19.368 "runtime": 2.00484, 00:35:19.368 "iops": 5012.868857365176, 00:35:19.368 "mibps": 626.608607170647, 00:35:19.368 "io_failed": 0, 00:35:19.368 "io_timeout": 0, 00:35:19.368 "avg_latency_us": 3184.0309964252806, 00:35:19.368 "min_latency_us": 2390.8503703703705, 00:35:19.368 "max_latency_us": 12718.838518518518 00:35:19.368 } 00:35:19.368 ], 00:35:19.368 "core_count": 1 00:35:19.368 } 00:35:19.368 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:19.368 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:19.368 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:19.368 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:19.368 11:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:19.368 | select(.opcode=="crc32c") 00:35:19.368 | "\(.module_name) \(.executed)"' 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394139 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394139 ']' 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394139 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.628 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394139 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394139' 00:35:19.887 killing process with pid 394139 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394139 00:35:19.887 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.887 00:35:19.887 
Latency(us) 00:35:19.887 [2024-11-17T10:29:44.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.887 [2024-11-17T10:29:44.545Z] =================================================================================================================== 00:35:19.887 [2024-11-17T10:29:44.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394139 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 392772 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 392772 ']' 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 392772 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 392772 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 392772' 00:35:19.887 killing process with pid 392772 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 392772 00:35:19.887 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 392772 00:35:20.146 00:35:20.146 real 0m15.499s 00:35:20.146 user 
0m30.724s 00:35:20.146 sys 0m4.411s 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:20.146 ************************************ 00:35:20.146 END TEST nvmf_digest_clean 00:35:20.146 ************************************ 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.146 ************************************ 00:35:20.146 START TEST nvmf_digest_error 00:35:20.146 ************************************ 00:35:20.146 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=394629 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:20.147 11:29:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 394629 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 394629 ']' 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:20.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.147 11:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.405 [2024-11-17 11:29:44.835846] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:20.405 [2024-11-17 11:29:44.835957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:20.405 [2024-11-17 11:29:44.908711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.405 [2024-11-17 11:29:44.956381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.405 [2024-11-17 11:29:44.956447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:20.405 [2024-11-17 11:29:44.956475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.405 [2024-11-17 11:29:44.956486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.405 [2024-11-17 11:29:44.956495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:20.405 [2024-11-17 11:29:44.957113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.665 [2024-11-17 11:29:45.097903] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.665 11:29:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.665 null0 00:35:20.665 [2024-11-17 11:29:45.211460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.665 [2024-11-17 11:29:45.235749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=394716 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 394716 /var/tmp/bperf.sock 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 394716 ']' 
00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:20.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:20.665 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.665 [2024-11-17 11:29:45.282438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:20.665 [2024-11-17 11:29:45.282513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394716 ] 00:35:20.924 [2024-11-17 11:29:45.347717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.924 [2024-11-17 11:29:45.393178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.924 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.924 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:20.924 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:20.924 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.183 11:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:21.751 nvme0n1 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:21.751 11:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:21.751 Running I/O for 2 seconds... 00:35:21.751 [2024-11-17 11:29:46.337631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.337695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.337731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-11-17 11:29:46.348119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.348149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.348182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-11-17 11:29:46.362604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.362633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.362666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-11-17 11:29:46.377698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.377728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7298 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.377761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-11-17 11:29:46.390705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.390735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.390767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.751 [2024-11-17 11:29:46.401227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:21.751 [2024-11-17 11:29:46.401254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.751 [2024-11-17 11:29:46.401293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.416139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.416167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.416197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.432077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.432104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.432136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.444418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.444447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.444480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.458191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.458235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.458251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.472317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.472347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.472380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.485549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.485581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.010 [2024-11-17 11:29:46.498267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.010 [2024-11-17 11:29:46.498312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.010 [2024-11-17 11:29:46.498329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.511005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.511035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.511053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.523636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.523685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.523702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.534889] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.534916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.534947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.549881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.549909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.549941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.564093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.564122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.564153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.579219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.579250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.579268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.594357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.594388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.594405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.609259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.609290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.609322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.622059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.622088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.622119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.633468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.633495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.633533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.648417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.648447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.648479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.011 [2024-11-17 11:29:46.661343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.011 [2024-11-17 11:29:46.661374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.011 [2024-11-17 11:29:46.661391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.675138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.675165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.675180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.689787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.689819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 
11:29:46.689836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.700711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.700740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.700772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.716279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.716308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.716340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.729914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.729945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.729977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.741346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.741376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10041 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.741408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.756596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.756625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.756663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.770510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.770548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.770582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.782119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.782146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.782176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.796345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.796387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.796403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.810896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.810926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.810958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.823349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.823379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.823412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.837370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.837414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.837431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.848598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.848627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.848644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.861710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.861739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.861769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.874908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.874954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.874970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.889207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.889251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.889267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.901688] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.901718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.901736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.271 [2024-11-17 11:29:46.916177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.271 [2024-11-17 11:29:46.916208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.271 [2024-11-17 11:29:46.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.927877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.927904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.530 [2024-11-17 11:29:46.927919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.941629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.941659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.530 [2024-11-17 11:29:46.941691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.952392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.952421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.530 [2024-11-17 11:29:46.952453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.967761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.967789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.530 [2024-11-17 11:29:46.967821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.981352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.981381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.530 [2024-11-17 11:29:46.981419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.530 [2024-11-17 11:29:46.997833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.530 [2024-11-17 11:29:46.997861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:46.997876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.010638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.010669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.010700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.022125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.022153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.022183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.035982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.036013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.036030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.050973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.051002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 
11:29:47.051033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.064190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.064220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.064252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.076660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.076689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.076722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.089387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.089417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.089450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.102990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.103026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16554 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.103060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.115232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.115260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.115291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.128861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.128889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.128921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.141238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.141266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.141297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.155693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.155739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.155756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.166807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.166837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.166868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.531 [2024-11-17 11:29:47.181478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.531 [2024-11-17 11:29:47.181520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.531 [2024-11-17 11:29:47.181545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.194679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.194709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.194739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.209398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.209441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.209457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.223199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.223256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.235219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.235246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.235277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.248003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.248031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.248062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.263186] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.263214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.263244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.276983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.277042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.289775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.289804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.289836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.305008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.305053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.305070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 18858.00 IOPS, 73.66 MiB/s [2024-11-17T10:29:47.448Z] [2024-11-17 11:29:47.321011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.321039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.321071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.332502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.332551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.332575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.347184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.347211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.347242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.361560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.361591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.361607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.372686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.372713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.372746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.387137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.387181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.790 [2024-11-17 11:29:47.387198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.790 [2024-11-17 11:29:47.402917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.790 [2024-11-17 11:29:47.402959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.791 [2024-11-17 11:29:47.402976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.791 [2024-11-17 11:29:47.413937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.791 [2024-11-17 11:29:47.413978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24694 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:22.791 [2024-11-17 11:29:47.413994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.791 [2024-11-17 11:29:47.427013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.791 [2024-11-17 11:29:47.427041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.791 [2024-11-17 11:29:47.427071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.791 [2024-11-17 11:29:47.439870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:22.791 [2024-11-17 11:29:47.439897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.791 [2024-11-17 11:29:47.439927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.049 [2024-11-17 11:29:47.454037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.049 [2024-11-17 11:29:47.454067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.049 [2024-11-17 11:29:47.454100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.049 [2024-11-17 11:29:47.469200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.049 [2024-11-17 11:29:47.469227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:13302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.049 [2024-11-17 11:29:47.469259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.049 [2024-11-17 11:29:47.481932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.049 [2024-11-17 11:29:47.481960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.049 [2024-11-17 11:29:47.481991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.049 [2024-11-17 11:29:47.493778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.049 [2024-11-17 11:29:47.493822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.049 [2024-11-17 11:29:47.493838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.049 [2024-11-17 11:29:47.508531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.049 [2024-11-17 11:29:47.508559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.049 [2024-11-17 11:29:47.508574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.520974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 
11:29:47.521001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.521030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.533753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.547520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.547571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.547587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.561001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.561031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.561069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.573083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.573125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.573141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.585292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.585333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.585349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.597559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.597586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.597617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.612289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.612316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.612348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.626558] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.626588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.626621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.638289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.638316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.638347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.651559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.651598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.651629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.664644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.664673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.677233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.677265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.677296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.050 [2024-11-17 11:29:47.693443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.050 [2024-11-17 11:29:47.693473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.050 [2024-11-17 11:29:47.693504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.706990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.707019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.707050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.720263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.720292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.720323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.732670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.732701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.732719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.747976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.748007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.748024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.759947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.759990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.760006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.773035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.773064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 
11:29:47.773096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.786765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.786827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.800366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.800394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.800425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.813936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.813978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.813994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.826999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.827029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21306 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.827062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.838484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.838513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.838554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.852542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.852570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.852602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.864154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.864181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.864211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.878493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.878533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.878553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.893659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.893686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.893718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.906997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.907026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.907065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.921541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.921568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.921599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.935721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e7d3f0) 00:35:23.309 [2024-11-17 11:29:47.935765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.309 [2024-11-17 11:29:47.935781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.309 [2024-11-17 11:29:47.951251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.310 [2024-11-17 11:29:47.951280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.310 [2024-11-17 11:29:47.951313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.310 [2024-11-17 11:29:47.963561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.310 [2024-11-17 11:29:47.963591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.310 [2024-11-17 11:29:47.963608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:47.975607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:47.975634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:47.975664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:47.989589] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:47.989617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:47.989648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.003676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.003720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.003736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.017604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.017633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.017665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.028851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.028877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.028908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.044290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.044320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.044352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.058423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.058483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.074061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.074105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.074121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.085557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.085585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.085616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.099491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.099545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.114936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.114965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.568 [2024-11-17 11:29:48.114997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.568 [2024-11-17 11:29:48.129704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.568 [2024-11-17 11:29:48.129747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.129763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.141293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.141354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.155664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.155691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.155722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.169207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.169235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.169265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.183176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.183206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.183238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.196434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.196463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17481 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.196495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.207685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.207728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.207745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.569 [2024-11-17 11:29:48.221611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.569 [2024-11-17 11:29:48.221642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.569 [2024-11-17 11:29:48.221660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.235302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 11:29:48.235330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.827 [2024-11-17 11:29:48.235361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.249629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 11:29:48.249658] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.827 [2024-11-17 11:29:48.249689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.260697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 11:29:48.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.827 [2024-11-17 11:29:48.260762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.273721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 11:29:48.273750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.827 [2024-11-17 11:29:48.273781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.288115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 11:29:48.288143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.827 [2024-11-17 11:29:48.288173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.827 [2024-11-17 11:29:48.302226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0) 00:35:23.827 [2024-11-17 
11:29:48.302256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.827 [2024-11-17 11:29:48.302289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.827 [2024-11-17 11:29:48.312873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0)
00:35:23.827 [2024-11-17 11:29:48.312899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.827 [2024-11-17 11:29:48.312914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.827 18936.00 IOPS, 73.97 MiB/s [2024-11-17T10:29:48.485Z]
[2024-11-17 11:29:48.327149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e7d3f0)
00:35:23.827 [2024-11-17 11:29:48.327176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.827 [2024-11-17 11:29:48.327207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.827
00:35:23.827 Latency(us)
00:35:23.827 [2024-11-17T10:29:48.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:23.827 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:23.827 nvme0n1 : 2.05 18555.07 72.48 0.00 0.00 6749.63 3325.35 47574.28
00:35:23.828 [2024-11-17T10:29:48.485Z] ===================================================================================================================
00:35:23.828 [2024-11-17T10:29:48.485Z] Total : 18555.07 72.48 0.00 0.00 6749.63 3325.35 47574.28
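The `data digest error` lines above come from NVMe/TCP data digest verification: the digest appended to a data PDU is a CRC32C (Castagnoli) over the PDU payload, and this digest-error test deliberately corrupts it so every READ completes with a transient transport error. As an illustrative sketch only (a bit-serial reference, not SPDK's accelerated implementation; the helper name is hypothetical):

```python
# Minimal bit-serial CRC32C (Castagnoli) -- illustrative reference only,
# not SPDK code. NVMe/TCP data digests use this polynomial; 0x82F63B78
# is the reflected form of the Castagnoli polynomial 0x1EDC6F41.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right, conditionally folding in the polynomial
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
print(hex(crc32c(b"123456789")))  # -> 0xe3069283
```

Any single corrupted payload byte changes the digest, which is why the receive path (`nvme_tcp_accel_seq_recv_compute_crc32_done`) flags every injected-corruption READ above.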
00:35:23.828 {
00:35:23.828 "results": [
00:35:23.828 {
00:35:23.828 "job": "nvme0n1",
00:35:23.828 "core_mask": "0x2",
00:35:23.828 "workload": "randread",
00:35:23.828 "status": "finished",
00:35:23.828 "queue_depth": 128,
00:35:23.828 "io_size": 4096,
00:35:23.828 "runtime": 2.047958,
00:35:23.828 "iops": 18555.068023855958,
00:35:23.828 "mibps": 72.48073446818734,
00:35:23.828 "io_failed": 0,
00:35:23.828 "io_timeout": 0,
00:35:23.828 "avg_latency_us": 6749.632803430799,
00:35:23.828 "min_latency_us": 3325.345185185185,
00:35:23.828 "max_latency_us": 47574.281481481485
00:35:23.828 }
00:35:23.828 ],
00:35:23.828 "core_count": 1
00:35:23.828 }
00:35:23.828 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:23.828 | .driver_specific
00:35:23.828 | .nvme_error
00:35:23.828 | .status_code
00:35:23.828 | .command_transient_transport_error'
00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 ))
00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 394716
00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 394716 ']'
00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 394716
00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:24.086 11:29:48
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394716 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394716' 00:35:24.086 killing process with pid 394716 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 394716 00:35:24.086 Received shutdown signal, test time was about 2.000000 seconds 00:35:24.086 00:35:24.086 Latency(us) 00:35:24.086 [2024-11-17T10:29:48.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.086 [2024-11-17T10:29:48.744Z] =================================================================================================================== 00:35:24.086 [2024-11-17T10:29:48.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:24.086 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 394716 00:35:24.344 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:24.344 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:24.344 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:24.344 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:24.344 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:24.345 11:29:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=395130 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 395130 /var/tmp/bperf.sock 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 395130 ']' 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.345 11:29:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.345 [2024-11-17 11:29:48.915270] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:24.345 [2024-11-17 11:29:48.915341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395130 ] 00:35:24.345 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:24.345 Zero copy mechanism will not be used. 
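The `iops` and `mibps` fields in the bperf results JSON printed earlier are derived values. As a quick sanity check (the numbers are copied verbatim from that results block; the arithmetic relating them is assumed, not taken from bdevperf source):

```python
# Recompute the derived throughput fields from the logged results block.
io_size = 4096             # "io_size": bytes per I/O
runtime = 2.047958         # "runtime": seconds of measured I/O
iops = 18555.068023855958  # "iops" as logged

# MiB/s = IOPS * bytes-per-I/O / 2^20, matching the logged "mibps" field
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # -> 72.48

# IOPS * runtime recovers the total number of completed I/Os
total_ios = round(iops * runtime)
print(total_ios)  # -> 38000
```

The recomputed 72.48 MiB/s matches both the JSON (`"mibps": 72.48073446818734`) and the summary table row for nvme0n1 above.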
00:35:24.345 [2024-11-17 11:29:48.981356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.603 [2024-11-17 11:29:49.032012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.603 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:24.603 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:24.603 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.603 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:24.862 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:25.428 nvme0n1 00:35:25.428 11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:29:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:25.428 I/O size of 131072 is greater than zero copy threshold (65536).
00:35:25.428 Zero copy mechanism will not be used.
00:35:25.428 Running I/O for 2 seconds...
00:35:25.428 [2024-11-17 11:29:50.075210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:25.428 [2024-11-17 11:29:50.075298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.428 [2024-11-17 11:29:50.075321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:25.428 [2024-11-17 11:29:50.081428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:25.428 [2024-11-17 11:29:50.081463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:25.428 [2024-11-17 11:29:50.081482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:25.688
[2024-11-17 11:29:50.087464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.087497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.087516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.093298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.093331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.093349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.098864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.098896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.098914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.104726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.104758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.104777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.111926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.111959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.111977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.118177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.118211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.688 [2024-11-17 11:29:50.118230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.688 [2024-11-17 11:29:50.124099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.688 [2024-11-17 11:29:50.124131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.124149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.129778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.129810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.129828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.134965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.134996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.139852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.139884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.139902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.144752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.144783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.144800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.149777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.149808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:25.689 [2024-11-17 11:29:50.149825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.154558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.154589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.154607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.159549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.159581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.159599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.163370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.163401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.163420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.168873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.168905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.168923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.174305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.174336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.174354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.180740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.180789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.185941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.185973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.185992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.190640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.190671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.190690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.194443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.194473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.194491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.198931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.198961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.198979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.203488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.203520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.203546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.208061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.208091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.208108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.212577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.212606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.212623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.216968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.216998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.217023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.221477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.221506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.221532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.226171] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.226202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.226219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.230716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.230745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.230762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.235881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.235912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.235930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.689 [2024-11-17 11:29:50.240785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.689 [2024-11-17 11:29:50.240817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.689 [2024-11-17 11:29:50.240835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.245390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.245421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.245438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.249877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.249907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.249924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.254653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.254683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.254701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.259772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.259812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.259830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.265702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.265733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.265751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.273206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.273238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.273255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.278806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.278837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.278855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.284897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.284929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 
11:29:50.284948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.288643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.288674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.288692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.292244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.292275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.292293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.296759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.296790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.296807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.301555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.301586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.301604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.306670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.306702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.306719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.312057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.312088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.312105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.317273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.317304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.317322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.322571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.322602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.322619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.327368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.327400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.327417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.331726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.331756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.331774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.336156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.336187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.336206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.690 [2024-11-17 11:29:50.340623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1127930) 00:35:25.690 [2024-11-17 11:29:50.340654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.690 [2024-11-17 11:29:50.340671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.950 [2024-11-17 11:29:50.344976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.950 [2024-11-17 11:29:50.345007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.950 [2024-11-17 11:29:50.345031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.950 [2024-11-17 11:29:50.349438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.950 [2024-11-17 11:29:50.349468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.950 [2024-11-17 11:29:50.349486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.950 [2024-11-17 11:29:50.354541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.950 [2024-11-17 11:29:50.354572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.950 [2024-11-17 11:29:50.354589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.950 [2024-11-17 11:29:50.361174] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.950 [2024-11-17 11:29:50.361205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.361223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.368039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.368070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.368088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.373393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.373438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.373454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.378959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.378990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.379007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.383413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.383443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.383461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.388070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.388099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.388116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.392514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.392551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.392568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.397066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.397095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.397112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.401562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.401591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.401608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.406282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.406312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.406329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.410927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.410958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.410975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.415462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.415491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 
11:29:50.415508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.419952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.419982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.420014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.424536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.424577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.424593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.428984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.429014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.429037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.433425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.433473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.433491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.437963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.437993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.438010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.443315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.443346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.443364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.448301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.448342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.448374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.452981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.453011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.453044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.457655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.457685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.457703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.951 [2024-11-17 11:29:50.462163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.951 [2024-11-17 11:29:50.462193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.951 [2024-11-17 11:29:50.462224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.466470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.466498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.466540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.470925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 
11:29:50.470977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.470994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.475652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.475682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.475700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.480055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.480116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.484510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.484550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.484569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.488925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.488964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.488981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.493621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.493650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.493668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.498011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.498041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.498058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.502485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.502514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.502540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.507004] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.507033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.507066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.511390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.511420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.511437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.515853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.515882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.515913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.520355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.520386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.520403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c 
p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.524756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.524785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.524801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.529748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.529779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.529796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.535442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.535471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.535488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.542925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.542956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.542974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.548815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.548846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.548864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.554247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.554278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.554302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.559472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.559503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.559521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.564870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.564900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.564917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.572020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.572052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.572070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.578568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.578600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.952 [2024-11-17 11:29:50.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:25.952 [2024-11-17 11:29:50.585497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.952 [2024-11-17 11:29:50.585537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.953 [2024-11-17 11:29:50.585557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:25.953 [2024-11-17 11:29:50.591960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.953 [2024-11-17 11:29:50.591992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:25.953 [2024-11-17 11:29:50.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:25.953 [2024-11-17 11:29:50.597202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.953 [2024-11-17 11:29:50.597233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.953 [2024-11-17 11:29:50.597251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:25.953 [2024-11-17 11:29:50.601969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:25.953 [2024-11-17 11:29:50.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:25.953 [2024-11-17 11:29:50.602018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.606308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.606345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.606363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.610840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.610871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.610889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.615251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.615281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.615298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.619694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.619724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.619741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.624022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.624052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.624069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.628822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.628853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.628871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.633904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.633934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.633952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.213 [2024-11-17 11:29:50.638741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.213 [2024-11-17 11:29:50.638772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.213 [2024-11-17 11:29:50.638789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.644229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.644260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.644277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.649595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 
00:35:26.214 [2024-11-17 11:29:50.649626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.649644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.653690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.653721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.657167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.657198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.657216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.660984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.661013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.661031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.665406] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.665437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.665454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.669970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.670000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.670017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.674517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.674553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.674571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.679151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.679181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.679198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.683782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.683811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.683839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.688371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.688402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.691723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.691754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.691770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.696024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.696055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.696073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.701815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.701846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.701863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.707689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.707721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.707739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.714041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.714073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.714091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.721851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.721883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.721901] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.729890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.729921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.729939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.737170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.737203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.737222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.744835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.744868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.744886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.752452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.752502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.760082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.760114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.760132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.767605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.767637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.767655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.775211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.214 [2024-11-17 11:29:50.775243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.214 [2024-11-17 11:29:50.775261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.214 [2024-11-17 11:29:50.782581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.782621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.782639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.790182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.790214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.790232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.797673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.797704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.797729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.805166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.805198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.805216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.812588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.812619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.812637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.820080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.820112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.820130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.826688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.826721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.826739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.832086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.832118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.832137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.838985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.839017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.839035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.844235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.844267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.844285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.849438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.849470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.849487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.854276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.854313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.854332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.858782] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.858811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.858828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.863381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.863410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.863428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.215 [2024-11-17 11:29:50.867902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.215 [2024-11-17 11:29:50.867930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.215 [2024-11-17 11:29:50.867947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.872395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.872425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.872442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.876972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.877002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.877019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.881491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.881521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.881548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.886448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.886477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.886494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.892196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.892225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.892242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.899709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.899739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.899756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.906079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.906110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.906128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.912175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.912206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.912224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.918750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.918782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 
11:29:50.918799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.925652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.925683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.925701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.933550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.933583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.933601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.939851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.939882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.939900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.945734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.945765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.945783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.950887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.950918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.950941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.956631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.956661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.956679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.963955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.963986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.964004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.971281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.971312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.971329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.475 [2024-11-17 11:29:50.978492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.475 [2024-11-17 11:29:50.978531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.475 [2024-11-17 11:29:50.978552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:50.984479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:50.984510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:50.984534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:50.989654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:50.989685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:50.989702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:50.994713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:50.994743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:50.994761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:50.999354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:50.999384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:50.999401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.003902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.003937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.003955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.008530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.008560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.008577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.012055] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.012086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.012103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.015873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.015901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.015918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.021131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.021162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.021179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.026029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.026060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.026077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.030691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.030720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.030737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.035269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.035299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.035331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.039797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.039826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.039843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.044417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.044445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.044477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.049034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.049079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.049095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.053716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.053760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.053776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.058350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.058393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.058411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 5811.00 IOPS, 726.38 MiB/s [2024-11-17T10:29:51.134Z] [2024-11-17 11:29:51.065064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.065109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.065126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.070014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.070059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.070076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.074621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.074668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.074685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.079324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.079368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.476 [2024-11-17 11:29:51.079385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.476 [2024-11-17 11:29:51.084681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.476 [2024-11-17 11:29:51.084718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.476 [2024-11-17 11:29:51.084737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.476 [2024-11-17 11:29:51.089434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.476 [2024-11-17 11:29:51.089464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.476 [2024-11-17 11:29:51.089496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.476 [2024-11-17 11:29:51.093993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.476 [2024-11-17 11:29:51.094022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.476 [2024-11-17 11:29:51.094039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.476 [2024-11-17 11:29:51.098572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.476 [2024-11-17 11:29:51.098602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.476 [2024-11-17 11:29:51.098619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.476 [2024-11-17 11:29:51.103118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.476 [2024-11-17 11:29:51.103149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.476 [2024-11-17 11:29:51.103167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.476 [2024-11-17 11:29:51.107648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.476 [2024-11-17 11:29:51.107678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.477 [2024-11-17 11:29:51.107695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.477 [2024-11-17 11:29:51.112166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.477 [2024-11-17 11:29:51.112198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.477 [2024-11-17 11:29:51.112215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.477 [2024-11-17 11:29:51.116689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.477 [2024-11-17 11:29:51.116719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.477 [2024-11-17 11:29:51.116736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.477 [2024-11-17 11:29:51.120857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.477 [2024-11-17 11:29:51.120889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.477 [2024-11-17 11:29:51.120922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.477 [2024-11-17 11:29:51.125512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.477 [2024-11-17 11:29:51.125550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.477 [2024-11-17 11:29:51.125568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.129998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.130028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.130045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.134555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.134593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.134610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.138867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.138897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.138914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.142169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.142199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.142215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.145861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.145891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.145908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.736 [2024-11-17 11:29:51.150282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.736 [2024-11-17 11:29:51.150313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.736 [2024-11-17 11:29:51.150331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.155341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.155371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.155389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.160951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.160983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.161007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.165011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.165042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.165060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.170953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.170984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.171017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.178938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.178970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.178987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.185082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.185113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.185146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.191052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.191083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.191115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.197231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.197261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.197291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.202934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.202979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.202997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.207709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.207739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.207757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.212314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.212365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.212383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.216927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.216956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.216972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.221654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.221685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.221702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.226215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.226245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.226277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.230856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.230887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.230904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.235561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.235590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.235607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.240247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.240297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.240313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.244848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.244877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.244894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.249469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.249498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.249514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.254079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.254107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.254139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.258829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.258857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.258889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.263611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.263640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.263656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.268376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.268419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.268435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.273530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.273573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.273590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.278054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.278084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.278116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.282610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.282640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.737 [2024-11-17 11:29:51.282657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.737 [2024-11-17 11:29:51.287740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.737 [2024-11-17 11:29:51.287771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.287788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.292920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.292965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.292987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.297581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.297611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.297627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.302402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.302432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.302449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.306954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.306984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.307001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.312462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.312492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.319324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.319370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.319388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.326414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.326447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.326465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.332820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.332851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.332868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.339502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.339542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.339576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.345258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.345290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.345308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.351556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.351588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.351606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.357270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.357301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.357319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.363908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.363940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.363958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.371622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.371654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.371672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.379186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.379218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.379235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.738 [2024-11-17 11:29:51.386956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.738 [2024-11-17 11:29:51.386988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.738 [2024-11-17 11:29:51.387006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.394190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.394221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.394239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.399613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.399644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.399669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.404221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.404252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.404269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.408765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.408795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.408813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.413204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.413234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.413251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.417644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.417689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.417706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.422376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.422406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.422423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.426891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.426921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.426937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.431398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.431427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.431443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.435959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.435988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.436005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.440490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.440532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.440552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.445143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.445173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.445190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.449534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.449565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.449582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.454196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.454227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.454245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.460754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.460816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.466847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.466877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.466909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.472357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.472388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.472405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.477513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.477549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.477567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.483046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.483092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.483109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.488873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.488903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.488920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.495298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.495343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.495362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.500744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.998 [2024-11-17 11:29:51.500774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.998 [2024-11-17 11:29:51.500808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:26.998 [2024-11-17 11:29:51.507059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930)
00:35:26.999 [2024-11-17 11:29:51.507106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:26.999 [2024-11-17 11:29:51.507124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.512593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.512640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.512658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.517723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.517753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.517787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.523491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.523536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.523558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.529558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.529589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 
11:29:51.529620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.535390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.535419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.535456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.541165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.541209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.541226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.547166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.547196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.547229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.553226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.553256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.553289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.559316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.559346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.559378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.565125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.565155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.565173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.571040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.571072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.571089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.578693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.578742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.578760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.585762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.585795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.585828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.593286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.593319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.593337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.600235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.600279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.600296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.608213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.608245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.608263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.614130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.614161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.614178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.619168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.619200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.619218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.623677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.623708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.623725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.628284] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.628315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.628331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.632773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.632802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.632819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.637381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.637421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.637443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.641991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.642037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.642054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c 
p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.646599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.646629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.646647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:26.999 [2024-11-17 11:29:51.651234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:26.999 [2024-11-17 11:29:51.651264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:26.999 [2024-11-17 11:29:51.651281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.655865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.655894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.655925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.660491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.660521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.660549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.665564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.665595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.665612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.670925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.670956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.670973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.675744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.675775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.675792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.680173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.680209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.680227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.684652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.684683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.684700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.689346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.689377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.689394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.693908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.693938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.693954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.259 [2024-11-17 11:29:51.698514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.259 [2024-11-17 11:29:51.698552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:27.259 [2024-11-17 11:29:51.698570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.703214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.703243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.703261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.707744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.707774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.707791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.712893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.712924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.712942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.717639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.717670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.717687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.722418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.722449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.722466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.727089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.727119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.727136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.731810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.731840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.731856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.736478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 
11:29:51.736507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.736534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.741138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.741168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.741185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.746033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.746063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.746079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.751669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.751699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.751716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.759168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.759198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.759216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.765582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.765611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.765633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.771092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.771121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.771138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.776647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.776678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.776695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:27.260 [2024-11-17 11:29:51.781865] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1127930) 00:35:27.260 [2024-11-17 11:29:51.781896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:27.260 [2024-11-17 11:29:51.781914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:27.260
5774.50 IOPS, 721.81 MiB/s [2024-11-17T10:29:52.180Z] 00:35:27.522 00:35:27.522 Latency(us) 00:35:27.522 [2024-11-17T10:29:52.180Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.522 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:27.522 nvme0n1 : 2.00 5772.91 721.61 0.00 0.00 2766.91 719.08 12379.02 00:35:27.522 [2024-11-17T10:29:52.180Z] =================================================================================================================== 00:35:27.522 [2024-11-17T10:29:52.180Z] Total : 5772.91 721.61 0.00 0.00 2766.91 719.08 12379.02 00:35:27.522 { 00:35:27.522 "results": [ 00:35:27.522 { 00:35:27.522 "job": "nvme0n1", 00:35:27.522 "core_mask": "0x2", 00:35:27.522 "workload": "randread", 00:35:27.522 "status": "finished", 00:35:27.522 "queue_depth": 16, 00:35:27.522 "io_size": 131072, 00:35:27.522 "runtime": 2.003322, 00:35:27.522 "iops": 5772.911194505926, 00:35:27.522 "mibps": 721.6138993132407, 00:35:27.522 "io_failed": 0, 00:35:27.522 "io_timeout": 0, 00:35:27.522 "avg_latency_us": 2766.9055529615216, 00:35:27.522 "min_latency_us": 719.0755555555555, 00:35:27.522 "max_latency_us": 12379.022222222222 00:35:27.522 } 00:35:27.522 ], 00:35:27.522 "core_count": 1 00:35:27.522 } 00:35:27.522 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:27.522 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:27.522 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:27.522 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:27.522 | .driver_specific 00:35:27.522 | .nvme_error 00:35:27.522 | .status_code 00:35:27.522 | .command_transient_transport_error' 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 373 > 0 )) 00:35:27.781 11:29:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 395130 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 395130 ']' 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 395130 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395130 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395130' 00:35:27.781 killing process with pid 395130 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 395130 00:35:27.781 Received shutdown signal, test time was about 2.000000 seconds 00:35:27.781 00:35:27.781 Latency(us) 00:35:27.781 [2024-11-17T10:29:52.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.781 [2024-11-17T10:29:52.439Z] =================================================================================================================== 00:35:27.781 [2024-11-17T10:29:52.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:27.781 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 395130 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # 
run_bperf_err randwrite 4096 128 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=395534 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 395534 /var/tmp/bperf.sock 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 395534 ']' 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:28.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.051 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.051 [2024-11-17 11:29:52.614412] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:28.051 [2024-11-17 11:29:52.614508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395534 ] 00:35:28.051 [2024-11-17 11:29:52.678086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.310 [2024-11-17 11:29:52.726012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.310 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.310 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:28.310 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:28.310 11:29:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:28.567 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.133 nvme0n1 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:29.133 11:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:29.133 Running I/O for 2 seconds... 
00:35:29.133 [2024-11-17 11:29:53.739622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f46d0 00:35:29.133 [2024-11-17 11:29:53.740732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.133 [2024-11-17 11:29:53.740770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:29.133 [2024-11-17 11:29:53.752765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f6458 00:35:29.133 [2024-11-17 11:29:53.753991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.133 [2024-11-17 11:29:53.754035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:29.133 [2024-11-17 11:29:53.765499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e23b8 00:35:29.133 [2024-11-17 11:29:53.766903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.133 [2024-11-17 11:29:53.766930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:29.133 [2024-11-17 11:29:53.778299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166edd58 00:35:29.133 [2024-11-17 11:29:53.779836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.133 [2024-11-17 11:29:53.779865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.791427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f2d80 00:35:29.392 [2024-11-17 11:29:53.793226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.793268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.804247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166fe2e8 00:35:29.392 [2024-11-17 11:29:53.806065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.806091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.812954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166de038 00:35:29.392 [2024-11-17 11:29:53.813746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.813774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.825343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166ef6a8 00:35:29.392 [2024-11-17 11:29:53.826474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.826515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.838135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f1ca0 00:35:29.392 [2024-11-17 11:29:53.839370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.839412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.850223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166fc998 00:35:29.392 [2024-11-17 11:29:53.851129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.851156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.861927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f3e60 00:35:29.392 [2024-11-17 11:29:53.863086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.863115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.873760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e6300 00:35:29.392 [2024-11-17 11:29:53.875033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.875076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.888181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f0788 00:35:29.392 [2024-11-17 11:29:53.889659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.889688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.899559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e5ec8 00:35:29.392 [2024-11-17 11:29:53.900875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.392 [2024-11-17 11:29:53.900916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:29.392 [2024-11-17 11:29:53.911024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e9168 00:35:29.392 [2024-11-17 11:29:53.912172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.912213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.922919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f0788 00:35:29.393 [2024-11-17 11:29:53.924094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 
[2024-11-17 11:29:53.924136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.934192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166f7da8 00:35:29.393 [2024-11-17 11:29:53.935259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.935287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.946076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166fd208 00:35:29.393 [2024-11-17 11:29:53.947123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.947163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.958262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166fc128 00:35:29.393 [2024-11-17 11:29:53.958929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.958979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.972285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166fac10 00:35:29.393 [2024-11-17 11:29:53.973593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21153 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.973621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.983725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166de8a8 00:35:29.393 [2024-11-17 11:29:53.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.984933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:53.995200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.393 [2024-11-17 11:29:53.995415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:53.995443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:54.008757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.393 [2024-11-17 11:29:54.008978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:54.009018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:54.022537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.393 [2024-11-17 11:29:54.022744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:54.022784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.393 [2024-11-17 11:29:54.036282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.393 [2024-11-17 11:29:54.036480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.393 [2024-11-17 11:29:54.036532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.651 [2024-11-17 11:29:54.050333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.651 [2024-11-17 11:29:54.050544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.651 [2024-11-17 11:29:54.050585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.651 [2024-11-17 11:29:54.064114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.651 [2024-11-17 11:29:54.064332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.651 [2024-11-17 11:29:54.064373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.651 [2024-11-17 11:29:54.077974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.651 [2024-11-17 11:29:54.078188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.651 [2024-11-17 11:29:54.078227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.651 [2024-11-17 11:29:54.091809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.651 [2024-11-17 11:29:54.092027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.651 [2024-11-17 11:29:54.092055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.651 [2024-11-17 11:29:54.105748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.651 [2024-11-17 11:29:54.105942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.105967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.119487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.119698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.119739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.133137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 
[2024-11-17 11:29:54.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.133389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.147074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.147252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.147292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.160931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.161150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.161177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.174734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.174946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.174986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.188564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) 
with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.188780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.202469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.202690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.202733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.216753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.216954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.231023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.231210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.231251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.245432] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.245685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.245713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.259468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.259709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.273768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.273986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.274013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.288058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.288258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.288285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.652 [2024-11-17 11:29:54.302441] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.652 [2024-11-17 11:29:54.302680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.652 [2024-11-17 11:29:54.302708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.910 [2024-11-17 11:29:54.316465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.910 [2024-11-17 11:29:54.316697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.910 [2024-11-17 11:29:54.316725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.910 [2024-11-17 11:29:54.330970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.910 [2024-11-17 11:29:54.331171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.910 [2024-11-17 11:29:54.331197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.910 [2024-11-17 11:29:54.345237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.910 [2024-11-17 11:29:54.345451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:10494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.910 [2024-11-17 11:29:54.345492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:35:29.910 [2024-11-17 11:29:54.359697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.910 [2024-11-17 11:29:54.359923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.910 [2024-11-17 11:29:54.359947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.910 [2024-11-17 11:29:54.374283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.374504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.374555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.388897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.389100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.389127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.403266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.403452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.403501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.417563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.417772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.417799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.432002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.432212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.432252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.446296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.446481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.446529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.460748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.460966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.460992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.475188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.475375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.475415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.489720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.489910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.489937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.504128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.504305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.504332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.517969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.518172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.518198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.532196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.532381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.532429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.546468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.546668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.546696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:29.911 [2024-11-17 11:29:54.560709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:29.911 [2024-11-17 11:29:54.560916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:29.911 [2024-11-17 11:29:54.560943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.169 [2024-11-17 11:29:54.574654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.169 [2024-11-17 11:29:54.574861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.169 
[2024-11-17 11:29:54.574888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.169 [2024-11-17 11:29:54.589261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.169 [2024-11-17 11:29:54.589467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.169 [2024-11-17 11:29:54.589494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.169 [2024-11-17 11:29:54.603676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.169 [2024-11-17 11:29:54.603901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.169 [2024-11-17 11:29:54.603928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.618204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.618404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.618431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.632565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.632757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20640 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.632799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.647020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.647239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.647282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.661377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.661602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.661630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.675761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.675980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.676006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.690182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.690370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:64 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.690413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.704535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.704755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.704797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.718909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.719112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.719139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 18762.00 IOPS, 73.29 MiB/s [2024-11-17T10:29:54.828Z] [2024-11-17 11:29:54.733328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.733514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.733568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.747816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 
[2024-11-17 11:29:54.748021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.748062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.762122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.762307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.762334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.776225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.776419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.776447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.790701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.790920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.790947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.805124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) 
with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.805312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.805354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.170 [2024-11-17 11:29:54.819540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.170 [2024-11-17 11:29:54.819754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.170 [2024-11-17 11:29:54.819796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.833757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.833947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.833975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.848247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.848467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.848509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.862657] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.862879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.862906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.877338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.877521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.877569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.891642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.891861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.891888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.906120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.906303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.906353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 
11:29:54.920302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.920488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.920515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.934601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.934836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.934862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.948931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.949163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.949188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.963351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.963557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.963587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.977667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.977887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.977913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:54.992070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:54.992279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:54.992319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:55.006464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:55.006694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.428 [2024-11-17 11:29:55.006722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.428 [2024-11-17 11:29:55.020925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.428 [2024-11-17 11:29:55.021112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.429 [2024-11-17 11:29:55.021140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.429 [2024-11-17 11:29:55.035035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.429 [2024-11-17 11:29:55.035260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.429 [2024-11-17 11:29:55.035299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.429 [2024-11-17 11:29:55.049423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.429 [2024-11-17 11:29:55.049675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.429 [2024-11-17 11:29:55.049702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.429 [2024-11-17 11:29:55.063854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.429 [2024-11-17 11:29:55.064074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.429 [2024-11-17 11:29:55.064101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.429 [2024-11-17 11:29:55.078243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.429 [2024-11-17 11:29:55.078434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.429 [2024-11-17 11:29:55.078460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.687 [2024-11-17 11:29:55.092397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.687 [2024-11-17 11:29:55.092610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.092637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.106731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.106936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.106979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.121277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.121492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.121540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.135576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.135767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.135810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.150079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.150298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.150340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.164551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.164730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.164758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.179043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.179263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.179303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.193358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.193544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.688 [2024-11-17 11:29:55.193594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.207863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.208062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.208103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.222071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.222271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.222300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.236473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.236698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.236727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.250815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.251030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9951 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.265104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.265285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.265326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.279388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.279589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.279623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.293330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.293552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.293594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.307656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.307875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:13639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.307902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.321846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.322058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.322099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.688 [2024-11-17 11:29:55.336148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.688 [2024-11-17 11:29:55.336355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.688 [2024-11-17 11:29:55.336382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.350199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.350413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.350453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.364535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.364726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.364769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.378664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.378853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.378880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.393027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.393213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.393253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.407262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.407484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.407531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.421647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 
[2024-11-17 11:29:55.421876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.421903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.436022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.436222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.436248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.450390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.450583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.450624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.464858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.465056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.465082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.479306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.479493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.479540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.493746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.493951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.493978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.508091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.508308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.508335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.522559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.522735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.522762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.536960] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.537190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.537218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.550992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.551205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.551244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.565238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.565437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.565464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.947 [2024-11-17 11:29:55.579409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.579637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.579665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:35:30.947 [2024-11-17 11:29:55.593723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:30.947 [2024-11-17 11:29:55.593941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.947 [2024-11-17 11:29:55.593968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.607866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.608061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.608088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.622005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.622198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.622224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.635900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.636091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.636117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.649906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.650135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.650170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.663891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.664091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.678035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.678256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.678298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.692045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.692248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.692290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.706031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.706243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.706284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 [2024-11-17 11:29:55.719883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.720093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.720118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 18322.00 IOPS, 71.57 MiB/s [2024-11-17T10:29:55.864Z] [2024-11-17 11:29:55.733922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x571460) with pdu=0x2000166e3d08 00:35:31.206 [2024-11-17 11:29:55.734151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:31.206 [2024-11-17 11:29:55.734178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:31.206 00:35:31.206 Latency(us) 00:35:31.206 [2024-11-17T10:29:55.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.206 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:31.206 nvme0n1 : 2.01 18324.42 71.58 0.00 0.00 6969.09 2779.21 14660.65 00:35:31.206 
[2024-11-17T10:29:55.864Z] =================================================================================================================== 00:35:31.206 [2024-11-17T10:29:55.864Z] Total : 18324.42 71.58 0.00 0.00 6969.09 2779.21 14660.65 00:35:31.206 { 00:35:31.206 "results": [ 00:35:31.206 { 00:35:31.206 "job": "nvme0n1", 00:35:31.206 "core_mask": "0x2", 00:35:31.206 "workload": "randwrite", 00:35:31.206 "status": "finished", 00:35:31.206 "queue_depth": 128, 00:35:31.206 "io_size": 4096, 00:35:31.206 "runtime": 2.008467, 00:35:31.206 "iops": 18324.423552888846, 00:35:31.206 "mibps": 71.57977950347205, 00:35:31.206 "io_failed": 0, 00:35:31.206 "io_timeout": 0, 00:35:31.207 "avg_latency_us": 6969.088596650122, 00:35:31.207 "min_latency_us": 2779.211851851852, 00:35:31.207 "max_latency_us": 14660.645925925926 00:35:31.207 } 00:35:31.207 ], 00:35:31.207 "core_count": 1 00:35:31.207 } 00:35:31.207 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:31.207 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:31.207 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:31.207 | .driver_specific 00:35:31.207 | .nvme_error 00:35:31.207 | .status_code 00:35:31.207 | .command_transient_transport_error' 00:35:31.207 11:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 395534 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 395534 ']' 00:35:31.466 11:29:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 395534 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395534 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395534' 00:35:31.466 killing process with pid 395534 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 395534 00:35:31.466 Received shutdown signal, test time was about 2.000000 seconds 00:35:31.466 00:35:31.466 Latency(us) 00:35:31.466 [2024-11-17T10:29:56.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.466 [2024-11-17T10:29:56.124Z] =================================================================================================================== 00:35:31.466 [2024-11-17T10:29:56.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:31.466 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 395534 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # 
rw=randwrite 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396005 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396005 /var/tmp/bperf.sock 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396005 ']' 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:31.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:31.724 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:31.724 [2024-11-17 11:29:56.308882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:31.724 [2024-11-17 11:29:56.308969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396005 ] 00:35:31.724 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:31.724 Zero copy mechanism will not be used. 00:35:31.983 [2024-11-17 11:29:56.382145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.983 [2024-11-17 11:29:56.431801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:31.983 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.983 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:31.983 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:31.983 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.241 11:29:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.807 nvme0n1 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:32.807 11:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:32.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:32.807 Zero copy mechanism will not be used. 00:35:32.807 Running I/O for 2 seconds... 
00:35:32.807 [2024-11-17 11:29:57.326130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.326237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.326275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.332577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.332667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.332698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.337975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.338050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.338077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.343384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.343472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.343499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.348921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.348996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.349023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.354159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.354245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.354272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.359541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.359620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.359648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.365170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.365238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.365266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.370710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.370781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.807 [2024-11-17 11:29:57.370808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.807 [2024-11-17 11:29:57.376330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.807 [2024-11-17 11:29:57.376414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.376441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.381587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.381684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.386681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.386765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.386792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.391797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.391885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.391912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.396925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.397018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.397045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.401933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.402003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.402029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.407154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.407239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 
[2024-11-17 11:29:57.407266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.412244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.412317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.412344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.417302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.417382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.417409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.422391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.422486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.422518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.427540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.427620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.427647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.432712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.432794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.432827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.437700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.437800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.437829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.443005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.443083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.443111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.448496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.448589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.448616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.454210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.454293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.454320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:32.808 [2024-11-17 11:29:57.459729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:32.808 [2024-11-17 11:29:57.459801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.808 [2024-11-17 11:29:57.459828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.465410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.465522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.465559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.472922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.473079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.473109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.479440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.479590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.479620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.485548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.485657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.485685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.491414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.491577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.491607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.497734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 
[2024-11-17 11:29:57.497902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.497931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.504255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.504384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.504412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.510749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.510917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.510946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.517232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.517413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.517441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.523605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.523736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.523765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.530132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.530274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.530302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.536569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.536735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.536763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.543079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.543262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.543290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.549361] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.549539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.549568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.555715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.555841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.555869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.561987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.562116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.562145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.568336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.568506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.568545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:35:33.067 [2024-11-17 11:29:57.574678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.574856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.574883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.581056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.581176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.581209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.587305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.587475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.587503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.593635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.593794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.593823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.067 [2024-11-17 11:29:57.600009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.067 [2024-11-17 11:29:57.600197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.067 [2024-11-17 11:29:57.600226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.606329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.606508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.606544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.612611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.612730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.612759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.619635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.619836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.619864] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.626405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.626559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.626587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.633129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.633286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.633315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.639465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.639580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.639614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.645706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.645875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.645903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.652030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.652214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.652243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.658452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.658600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.658629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.665058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.665254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.665282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.671412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.671561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:33.068 [2024-11-17 11:29:57.671590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.677888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.678015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.678045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.684240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.684407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.684435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.690609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.690793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.690821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.697138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.697290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.697319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.703741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.703859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.703888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.710176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.710297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.710326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.068 [2024-11-17 11:29:57.716817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.068 [2024-11-17 11:29:57.716950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.068 [2024-11-17 11:29:57.716979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.723919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.724090] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.724120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.730576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.730678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.736607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.736677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.736704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.742375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.742445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.742473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.747554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.747630] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.747656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.752332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.752422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.752449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.757273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.757350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.757376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.762190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.762277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.762304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.767201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with 
pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.767278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.767305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.772209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.772284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.772311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.327 [2024-11-17 11:29:57.777088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.327 [2024-11-17 11:29:57.777164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.327 [2024-11-17 11:29:57.777190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.782692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.782760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.782787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.788142] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.788215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.788241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.793236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.793319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.793352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.798308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.798380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.798406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.803348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.803430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.803456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.808224] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.808300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.808326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.813096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.813163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.813190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.818709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.818801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.818828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.824026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.824140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.824168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:35:33.328 [2024-11-17 11:29:57.829810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.829914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.829942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.836200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.836362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.836390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.842604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.842805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.842833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.849505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.849587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.849613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.855956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.856107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.856135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.863115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.863204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.863231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.870121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.870201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.870228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.876116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.876197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.876224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.882261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.882385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.882413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.887498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.887578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.887615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.892754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.892884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.892913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.898923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.899130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.899158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.904729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.904878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.904906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.909686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.909811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.909839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.914822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.914914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.914940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.919866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.919946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:33.328 [2024-11-17 11:29:57.919973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.924759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.924861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.924889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.931052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.328 [2024-11-17 11:29:57.931248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.328 [2024-11-17 11:29:57.931277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.328 [2024-11-17 11:29:57.936800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.936936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.936964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.942390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.942565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.948758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.948876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.948905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.954943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.955139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.955168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.962017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.962103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.962129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.968258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.968328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.968355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.974499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.974629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.974658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.329 [2024-11-17 11:29:57.981571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.329 [2024-11-17 11:29:57.981650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.329 [2024-11-17 11:29:57.981677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:57.988591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:57.988660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:57.988687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:57.994391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:57.994481] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:57.994508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.001113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.001206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.001233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.007937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.008015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.008041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.014862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.014977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.015005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.021221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 
00:35:33.588 [2024-11-17 11:29:58.021344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.021373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.026557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.026640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.026667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.031417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.031558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.031587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.037089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.037224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.037252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.043326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.043457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.043486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.049479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.049682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.049712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.056390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.056605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.056634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.061899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.061993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.062023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.066825] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.066970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.588 [2024-11-17 11:29:58.066997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.588 [2024-11-17 11:29:58.072430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.588 [2024-11-17 11:29:58.072574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.072602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.077423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.077565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.077604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.082516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.082638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.082666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:33.589 [2024-11-17 11:29:58.087366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.087520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.087556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.092413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.092597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.092627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.098675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.098830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.098864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.104769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.104853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.104881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.111582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.111775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.111803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.118237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.118331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.118359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.124833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.125100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.125129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.130637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.130966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.130994] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.136594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.136957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.136986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.141964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.142277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.142306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.146617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.146890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.146918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.151155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.151431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.151460] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.155714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.156049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.156077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.160428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.160704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.160732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.165339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.165634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.165664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.169770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.170003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:33.589 [2024-11-17 11:29:58.170030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.174645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.174909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.174937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.180199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.180490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.180520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.185173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.185505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.185542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.191343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.191639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.191668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.196353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.196616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.196645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.200645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.200852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.200879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.204746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.204965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.204992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.209393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.209641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.589 [2024-11-17 11:29:58.209669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.589 [2024-11-17 11:29:58.213791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.589 [2024-11-17 11:29:58.214008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.214036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.218246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 [2024-11-17 11:29:58.218466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.218494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.222586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 [2024-11-17 11:29:58.222792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.222820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.226803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 
[2024-11-17 11:29:58.227026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.227054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.231445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 [2024-11-17 11:29:58.231681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.231714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.235894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 [2024-11-17 11:29:58.236107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.236135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.590 [2024-11-17 11:29:58.240455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.590 [2024-11-17 11:29:58.240686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.590 [2024-11-17 11:29:58.240715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.244925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.245128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.245156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.249348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.249564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.249592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.253826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.253920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.253946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.258583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.258805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.258833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.263143] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.263372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.263399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.268231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.268458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.268485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.273849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.274097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.274125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:33.849 [2024-11-17 11:29:58.279500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:33.849 [2024-11-17 11:29:58.279821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:33.849 [2024-11-17 11:29:58.279849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:33.849 [2024-11-17 11:29:58.284634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.284853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.284881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.289242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.289518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.289556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.294459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.294719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.299893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.300193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.300222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.305807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.306041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.306069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.310964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.311264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.311292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.316082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.316354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.316381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.849 [2024-11-17 11:29:58.321576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.849 [2024-11-17 11:29:58.321835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.849 [2024-11-17 11:29:58.321864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.849 5426.00 IOPS, 678.25 MiB/s [2024-11-17T10:29:58.507Z] [2024-11-17 11:29:58.328456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.328729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.328757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.332873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.333070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.333099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.337243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.337497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.337532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.342377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.342626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.342655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.347174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.347368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.347396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.351314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.351500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.351533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.355484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.355679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.355707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.359993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.360294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.360330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.365100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.365381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.365410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.370446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.370740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.370768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.376478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.376702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.376731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.380797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.381003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.381031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.384950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.385205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.385233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.389315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.389565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.389593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.393571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.393775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.393803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.397718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.397910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.397938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.402129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.402352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.402380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.407257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.407460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.407489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.411345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.411553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.411581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.415422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.415631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.415661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.419699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.419894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.419922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.424744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.424998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.425026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.429775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.430003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.430031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.435496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.435776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.435804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.440583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.440861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.440890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.445675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.445938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.445967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.450655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.850 [2024-11-17 11:29:58.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.850 [2024-11-17 11:29:58.450822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.850 [2024-11-17 11:29:58.455690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 [2024-11-17 11:29:58.455864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.455891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.460761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.460941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.460969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.465872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.466041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.466069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.470849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.470982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.471010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.475902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.476072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.476100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.480965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.481167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.486071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.486229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.491218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:33.851 [2024-11-17 11:29:58.491432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.491460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.496345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 [2024-11-17 11:29:58.496513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.496550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:33.851 [2024-11-17 11:29:58.501419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 [2024-11-17 11:29:58.501596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:33.851 [2024-11-17 11:29:58.501625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.506485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.506676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.506704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.511549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.511752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.511781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.516571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.516714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.516742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.521758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.521906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.521934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.526820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.526996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.527024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.531870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.532075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.532103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.537108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.537282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.537310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.542194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.542354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.542382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.547881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.548077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.548104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.553386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.553558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.553586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.558423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.558616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.558644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.563437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.563584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.563612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.568599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.568789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.568817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.573674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.573867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.573894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.578732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 [2024-11-17 11:29:58.578832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.578859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.583769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.583905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.583933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.588815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.588987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.589016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.593916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.594067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.594095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.598949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.599140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.599168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.603948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.604095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.604122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.609027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.609178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.609206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.614095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.614256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.614285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.619111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.619215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.619249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.624254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.624423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.624451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.629403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.111 [2024-11-17 11:29:58.629500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.111 [2024-11-17 11:29:58.629533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.111 [2024-11-17 11:29:58.634416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.112 [2024-11-17 11:29:58.634578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.112 [2024-11-17 11:29:58.634606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.112 [2024-11-17 11:29:58.639512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.112 [2024-11-17 11:29:58.639660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.112 [2024-11-17 11:29:58.639688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:34.112 [2024-11-17 11:29:58.644647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.112 [2024-11-17 11:29:58.644738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.112 [2024-11-17 11:29:58.644765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:34.112 [2024-11-17 11:29:58.649671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8
00:35:34.112 [2024-11-17 11:29:58.649818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.112 [2024-11-17 11:29:58.649845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:34.112 [2024-11-17 11:29:58.654722]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.654902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.654929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.659796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.659978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.660006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.664867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.665054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.665088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.669773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.669906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.669934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:34.112 [2024-11-17 11:29:58.674322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.674515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.674555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.679437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.679588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.679618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.685376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.685576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.685604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.690181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.690302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.690329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.694426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.694518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.694553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.698649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.698747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.698774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.703944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.704016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.704043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.708141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.708270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.708297] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.712973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.713132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.713161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.718060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.718198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.718226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.723836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.724046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.724074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.728983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.729119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.729146] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.733211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.733357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.733384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.737596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.737685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.737711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.741902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.742047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.742074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.746361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.746474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:34.112 [2024-11-17 11:29:58.746501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.750772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.750893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.750921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.755173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.755288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.755316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.759613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.112 [2024-11-17 11:29:58.759731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.112 [2024-11-17 11:29:58.759758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.112 [2024-11-17 11:29:58.764045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.113 [2024-11-17 11:29:58.764157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.113 [2024-11-17 11:29:58.764185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.768471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.768583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.768610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.772881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.773000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.773028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.777241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.777334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.777361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.781634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.781728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.781754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.786072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.786156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.786191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.790511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.790655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.790682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.794877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.794989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.795017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.799231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.799333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.799365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.803489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.803634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.803662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.807953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.808063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.808090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.812233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.812333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.812361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.816453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with 
pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.816555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.816582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.821344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.821487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.826151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.826242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.826269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.830304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.830410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.834468] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.834562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.834588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.838635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.838702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.838728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.842786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.842872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.842898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.846973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.847051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.847077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.851119] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.851190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.373 [2024-11-17 11:29:58.851217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.373 [2024-11-17 11:29:58.855267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.373 [2024-11-17 11:29:58.855344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.855370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.859430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.859504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.859541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.863584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.863668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.863694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:34.374 [2024-11-17 11:29:58.867765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.867847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.867872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.871903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.871995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.872020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.876027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.876118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.876144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.880178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.880261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.880287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.884334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.884411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.884437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.888494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.888601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.888632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.892641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.892731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.892757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.896848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.896931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.896964] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.901060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.901165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.905197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.905276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.905302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.909425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.909509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.909543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.913634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.913730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.913757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.917786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.917872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.917898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.921938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.922016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.926104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.926183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.926208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.930311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.930394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.374 [2024-11-17 11:29:58.930420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.934498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.934611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.934638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.938740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.938818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.938844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.942945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.943028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.943054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.947091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.947170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.947197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.951194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.951262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.951288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.955333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.955416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.955442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.959483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.959568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.959594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.963617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.963694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.374 [2024-11-17 11:29:58.963720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.374 [2024-11-17 11:29:58.967766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.374 [2024-11-17 11:29:58.967844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.967870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.972078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.972154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.972180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.976246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.976329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.976355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.980362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.980445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.980471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.984481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.984560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.984586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.988643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.988723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.988749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.992772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:58.992847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.992873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:58.996914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 
00:35:34.375 [2024-11-17 11:29:58.996999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:58.997025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.001033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.001104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.001130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.005178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.005250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.005283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.009282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.009360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.009386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.013428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.013496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.013521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.017585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.017670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.017696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.021679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.021755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.021781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.375 [2024-11-17 11:29:59.025857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.375 [2024-11-17 11:29:59.025935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.375 [2024-11-17 11:29:59.025966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.030006] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.030092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.030117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.034164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.034233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.034258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.038312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.038389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.038416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.042531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.042680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.042707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:34.638 [2024-11-17 11:29:59.047389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.047569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.047597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.052374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.052536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.052564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.058152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.058374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.058402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.063560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.063688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.063717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.068765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.068915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.068943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.073789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.073989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.074018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.078923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.079084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.079112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.083999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.084163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.084191] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.089125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.089259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.089286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.094277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.094432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.094460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.099277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.099395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.099422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.104347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.104467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.104496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.109444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.109576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.109605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.114517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.114663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.119607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.119713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.119741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.124785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.124929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.638 [2024-11-17 11:29:59.124957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.129803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.130019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.130055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.134819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.134909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.134936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.139109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.139186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.139212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.143332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.143480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.143507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.147546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.147684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.151813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.151915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.151943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.156000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.156102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.156130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.160197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.160316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.160343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.164485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.164706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.164735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.169522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.169680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.169710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.174908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.175016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.175043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.180774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.180897] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.180925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.185093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.185178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.185205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.189432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.189602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.189631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.193596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.193690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.193716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.197780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with 
pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.197879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.197905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.201922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.202017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.202047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.206079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.206173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.206199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.210279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.210425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.210453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.214477] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.214603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.214631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.218659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.218754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.218784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.222821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.222908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.222934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.227004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.227089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.227115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 
11:29:59.231191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.231274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.231300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.235365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.235463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.235494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.239540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.239639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.239666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.243816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.243979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.244012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.248959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.249199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.638 [2024-11-17 11:29:59.253986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.638 [2024-11-17 11:29:59.254135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.638 [2024-11-17 11:29:59.254163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.259521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.259715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.259742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.264812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.265040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.265068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.271005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.271157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.271185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.276464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.276629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.276657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.281374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.281499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.281532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.285741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.285842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.285870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.639 [2024-11-17 11:29:59.290026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.639 [2024-11-17 11:29:59.290160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.639 [2024-11-17 11:29:59.290196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.294376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.294517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 [2024-11-17 11:29:59.294555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.298534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.298631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 [2024-11-17 11:29:59.298658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.302703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.302812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 
[2024-11-17 11:29:59.302839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.306901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.307009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 [2024-11-17 11:29:59.307037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.311350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.311590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 [2024-11-17 11:29:59.311618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.316390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.316543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.897 [2024-11-17 11:29:59.316571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.897 [2024-11-17 11:29:59.321852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x5717a0) with pdu=0x2000166ff3c8 00:35:34.897 [2024-11-17 11:29:59.321986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:34.897 [2024-11-17 11:29:59.322018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:34.897 6051.00 IOPS, 756.38 MiB/s
00:35:34.897 Latency(us)
00:35:34.897 [2024-11-17T10:29:59.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:34.897 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:34.897 nvme0n1 : 2.00 6048.45 756.06 0.00 0.00 2638.65 1905.40 10145.94
00:35:34.897 [2024-11-17T10:29:59.555Z] ===================================================================================================================
00:35:34.897 [2024-11-17T10:29:59.555Z] Total : 6048.45 756.06 0.00 0.00 2638.65 1905.40 10145.94
00:35:34.897 {
00:35:34.897 "results": [
00:35:34.897 {
00:35:34.897 "job": "nvme0n1",
00:35:34.897 "core_mask": "0x2",
00:35:34.897 "workload": "randwrite",
00:35:34.897 "status": "finished",
00:35:34.897 "queue_depth": 16,
00:35:34.897 "io_size": 131072,
00:35:34.897 "runtime": 2.003984,
00:35:34.897 "iops": 6048.451484642592,
00:35:34.897 "mibps": 756.056435580324,
00:35:34.897 "io_failed": 0,
00:35:34.897 "io_timeout": 0,
00:35:34.897 "avg_latency_us": 2638.6471875257816,
00:35:34.897 "min_latency_us": 1905.3985185185186,
00:35:34.897 "max_latency_us": 10145.943703703704
00:35:34.897 }
00:35:34.897 ],
00:35:34.897 "core_count": 1
00:35:34.897 }
00:35:34.897 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:34.897 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:34.897 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:34.897 | .driver_specific
00:35:34.897 | .nvme_error
00:35:34.897 | .status_code
00:35:34.897 | .command_transient_transport_error'
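The repeated `data_crc32_calc_done: *ERROR*: Data digest error` entries above are the receiver recomputing the CRC32C data digest over each TCP PDU payload and finding a mismatch (this test corrupts the digest on purpose, which is why the run still reports `io_failed: 0` with only transient transport errors). As a rough illustration only — SPDK uses table-driven or hardware-accelerated CRC, not this loop — a bit-at-a-time CRC-32C sketch:

```python
# Minimal bit-at-a-time CRC-32C (Castagnoli), the digest NVMe/TCP carries for
# PDU payloads. Illustrative only; real implementations use lookup tables or
# CPU instructions (e.g. SSE4.2 crc32).
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF                       # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):                 # one bit per iteration
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)  # reflected poly
    return crc ^ 0xFFFFFFFF                # final XOR

payload = b"123456789"
digest = crc32c(payload)
print(hex(digest))   # standard CRC-32C check value for "123456789": 0xe3069283

# A receiver reports a data digest error when the digest it recomputes over
# the received payload differs from the digest carried in the PDU:
corrupted = b"123456788"
assert crc32c(corrupted) != digest
```

The `(00/22)` status in the completions above is the generic/transient-transport-error pair the target returns for exactly this kind of digest failure.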
00:35:34.897 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396005 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396005 ']' 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396005 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396005 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396005' 00:35:35.155 killing process with pid 396005 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396005 00:35:35.155 Received shutdown signal, test time was about 2.000000 seconds 00:35:35.155 00:35:35.155 Latency(us) 00:35:35.155 [2024-11-17T10:29:59.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.155 [2024-11-17T10:29:59.813Z] 
=================================================================================================================== 00:35:35.155 [2024-11-17T10:29:59.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.155 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396005 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 394629 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 394629 ']' 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 394629 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394629 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394629' 00:35:35.414 killing process with pid 394629 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 394629 00:35:35.414 11:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 394629 00:35:35.689 00:35:35.690 real 0m15.313s 00:35:35.690 user 0m30.630s 00:35:35.690 sys 0m4.363s 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 
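The `get_transient_errcount` step traced above fetches `bdev_get_iostat` JSON over the bperf RPC socket and extracts `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` with jq, then requires the count to be positive (here the trace shows `(( 391 > 0 ))`). The same extraction, sketched in Python — the dict below is a hypothetical reconstruction from the jq path and the traced value, not a verbatim RPC reply:

```python
# Walk the same key path the jq filter in get_transient_errcount uses.
# Only the keys named in that filter are taken from the log; the surrounding
# structure and the "name" field are assumptions for illustration.
iostat = {
    "bdevs": [
        {
            "name": "nvme0n1",
            "driver_specific": {
                "nvme_error": {
                    "status_code": {
                        # value observed in the trace: (( 391 > 0 ))
                        "command_transient_transport_error": 391
                    }
                }
            }
        }
    ]
}

errcount = (
    iostat["bdevs"][0]
    ["driver_specific"]["nvme_error"]["status_code"]
    ["command_transient_transport_error"]
)
assert errcount > 0  # the digest_error test only passes if transient errors occurred
print(errcount)
```

The positive count is the pass condition: with digests deliberately corrupted, every WRITE should complete with a transient transport error rather than succeed silently.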
00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:35.690 ************************************ 00:35:35.690 END TEST nvmf_digest_error 00:35:35.690 ************************************ 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:35.690 rmmod nvme_tcp 00:35:35.690 rmmod nvme_fabrics 00:35:35.690 rmmod nvme_keyring 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 394629 ']' 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 394629 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 394629 ']' 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 394629 00:35:35.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (394629) - No such process 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@981 -- # echo 'Process with pid 394629 is not found' 00:35:35.690 Process with pid 394629 is not found 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:35.690 11:30:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.657 00:35:37.657 real 0m35.423s 00:35:37.657 user 1m2.288s 00:35:37.657 sys 0m10.457s 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:37.657 ************************************ 00:35:37.657 END TEST nvmf_digest 00:35:37.657 ************************************ 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 
1 ]] 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.657 ************************************ 00:35:37.657 START TEST nvmf_bdevperf 00:35:37.657 ************************************ 00:35:37.657 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:37.916 * Looking for test storage... 
00:35:37.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.916 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:37.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.917 --rc genhtml_branch_coverage=1 00:35:37.917 --rc genhtml_function_coverage=1 00:35:37.917 --rc genhtml_legend=1 00:35:37.917 --rc geninfo_all_blocks=1 00:35:37.917 --rc geninfo_unexecuted_blocks=1 00:35:37.917 00:35:37.917 ' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:35:37.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.917 --rc genhtml_branch_coverage=1 00:35:37.917 --rc genhtml_function_coverage=1 00:35:37.917 --rc genhtml_legend=1 00:35:37.917 --rc geninfo_all_blocks=1 00:35:37.917 --rc geninfo_unexecuted_blocks=1 00:35:37.917 00:35:37.917 ' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:37.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.917 --rc genhtml_branch_coverage=1 00:35:37.917 --rc genhtml_function_coverage=1 00:35:37.917 --rc genhtml_legend=1 00:35:37.917 --rc geninfo_all_blocks=1 00:35:37.917 --rc geninfo_unexecuted_blocks=1 00:35:37.917 00:35:37.917 ' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:37.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.917 --rc genhtml_branch_coverage=1 00:35:37.917 --rc genhtml_function_coverage=1 00:35:37.917 --rc genhtml_legend=1 00:35:37.917 --rc geninfo_all_blocks=1 00:35:37.917 --rc geninfo_unexecuted_blocks=1 00:35:37.917 00:35:37.917 ' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:37.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:37.917 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.918 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.918 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.918 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.918 11:30:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:40.453 11:30:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:40.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:40.453 
11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:40.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:40.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:40.453 11:30:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:40.453 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:40.453 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:40.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:40.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:35:40.454 00:35:40.454 --- 10.0.0.2 ping statistics --- 00:35:40.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.454 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:40.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:40.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:35:40.454 00:35:40.454 --- 10.0.0.1 ping statistics --- 00:35:40.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:40.454 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=398531 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 398531 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 398531 ']' 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 [2024-11-17 11:30:04.724962] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:35:40.454 [2024-11-17 11:30:04.725040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:40.454 [2024-11-17 11:30:04.798010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.454 [2024-11-17 11:30:04.843122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:40.454 [2024-11-17 11:30:04.843184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:40.454 [2024-11-17 11:30:04.843211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:40.454 [2024-11-17 11:30:04.843222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:40.454 [2024-11-17 11:30:04.843231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
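[editor's note] The `waitforlisten 398531` step above blocks until the freshly started nvmf_tgt process begins accepting connections on its JSON-RPC socket at /var/tmp/spdk.sock. A minimal Python sketch of that kind of polling loop follows; the helper name, timeout, and retry interval are illustrative, not SPDK's actual implementation:

```python
import os
import socket
import time

def wait_for_unix_socket(path, timeout=5.0, interval=0.1):
    """Poll until a UNIX domain socket at `path` accepts connections.

    Returns True once a connect() succeeds, False if `timeout` elapses
    first. Mirrors the spirit of the waitforlisten step in the log above
    (illustrative only, not the SPDK helper itself).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # target is up and listening
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(interval)
    return False
```

The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message in the log corresponds to a loop of this shape, bounded by the `max_retries=100` seen in the trace.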
00:35:40.454 [2024-11-17 11:30:04.844707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:40.454 [2024-11-17 11:30:04.844768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:40.454 [2024-11-17 11:30:04.844772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 [2024-11-17 11:30:04.987276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.454 11:30:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 Malloc0 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:40.454 [2024-11-17 11:30:05.054799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:40.454 
11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:40.454 { 00:35:40.454 "params": { 00:35:40.454 "name": "Nvme$subsystem", 00:35:40.454 "trtype": "$TEST_TRANSPORT", 00:35:40.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:40.454 "adrfam": "ipv4", 00:35:40.454 "trsvcid": "$NVMF_PORT", 00:35:40.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:40.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:40.454 "hdgst": ${hdgst:-false}, 00:35:40.454 "ddgst": ${ddgst:-false} 00:35:40.454 }, 00:35:40.454 "method": "bdev_nvme_attach_controller" 00:35:40.454 } 00:35:40.454 EOF 00:35:40.454 )") 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:40.454 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:40.455 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:40.455 11:30:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:40.455 "params": { 00:35:40.455 "name": "Nvme1", 00:35:40.455 "trtype": "tcp", 00:35:40.455 "traddr": "10.0.0.2", 00:35:40.455 "adrfam": "ipv4", 00:35:40.455 "trsvcid": "4420", 00:35:40.455 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:40.455 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:40.455 "hdgst": false, 00:35:40.455 "ddgst": false 00:35:40.455 }, 00:35:40.455 "method": "bdev_nvme_attach_controller" 00:35:40.455 }' 00:35:40.455 [2024-11-17 11:30:05.102948] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:40.455 [2024-11-17 11:30:05.103022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398564 ]
00:35:40.718 [2024-11-17 11:30:05.172256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:40.976 [2024-11-17 11:30:05.221391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:42.352 Running I/O for 1 seconds...
00:35:42.352 8801.00 IOPS, 34.38 MiB/s
00:35:42.352
00:35:42.352 Latency(us)
00:35:42.352 [2024-11-17T10:30:07.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:42.352 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:42.352 Verification LBA range: start 0x0 length 0x4000
00:35:42.352 Nvme1n1 : 1.01 8844.34 34.55 0.00 0.00 14407.51 1201.49 14951.92
00:35:42.352 [2024-11-17T10:30:07.010Z] ===================================================================================================================
00:35:42.352 [2024-11-17T10:30:07.010Z] Total : 8844.34 34.55 0.00 0.00 14407.51 1201.49 14951.92
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=398819
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for
subsystem in "${@:-1}" 00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.352 { 00:35:42.352 "params": { 00:35:42.352 "name": "Nvme$subsystem", 00:35:42.352 "trtype": "$TEST_TRANSPORT", 00:35:42.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.352 "adrfam": "ipv4", 00:35:42.352 "trsvcid": "$NVMF_PORT", 00:35:42.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.352 "hdgst": ${hdgst:-false}, 00:35:42.352 "ddgst": ${ddgst:-false} 00:35:42.352 }, 00:35:42.352 "method": "bdev_nvme_attach_controller" 00:35:42.352 } 00:35:42.352 EOF 00:35:42.352 )") 00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:42.352 11:30:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.352 "params": { 00:35:42.352 "name": "Nvme1", 00:35:42.352 "trtype": "tcp", 00:35:42.352 "traddr": "10.0.0.2", 00:35:42.352 "adrfam": "ipv4", 00:35:42.352 "trsvcid": "4420", 00:35:42.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:42.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:42.352 "hdgst": false, 00:35:42.352 "ddgst": false 00:35:42.352 }, 00:35:42.352 "method": "bdev_nvme_attach_controller" 00:35:42.352 }' 00:35:42.352 [2024-11-17 11:30:06.822746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:35:42.352 [2024-11-17 11:30:06.822832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398819 ] 00:35:42.352 [2024-11-17 11:30:06.890093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.352 [2024-11-17 11:30:06.938137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.611 Running I/O for 15 seconds... 00:35:44.918 8384.00 IOPS, 32.75 MiB/s [2024-11-17T10:30:09.838Z] 8431.00 IOPS, 32.93 MiB/s [2024-11-17T10:30:09.838Z] 11:30:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 398531 00:35:45.180 11:30:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:45.180 [2024-11-17 11:30:09.788611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.180 [2024-11-17 11:30:09.788862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.180 [2024-11-17 11:30:09.788878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.788891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.788924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.788937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.788952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.788980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.788995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 
[2024-11-17 11:30:09.789489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43296 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.181 [2024-11-17 11:30:09.789945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.789987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.789999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 
11:30:09.790013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.790024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.790038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.790049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.181 [2024-11-17 11:30:09.790063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.181 [2024-11-17 11:30:09.790074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:43336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:43352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:43368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:43376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:45.182 [2024-11-17 11:30:09.790329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 
[2024-11-17 11:30:09.790444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790620] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.790954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 
[2024-11-17 11:30:09.790979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.790990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.182 [2024-11-17 11:30:09.791187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.182 [2024-11-17 11:30:09.791199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 
[2024-11-17 11:30:09.791419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:45.183 [2024-11-17 11:30:09.791579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.183 [2024-11-17 11:30:09.791593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.791983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.791996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.183 [2024-11-17 11:30:09.792308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.183 [2024-11-17 11:30:09.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:43104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:45.184 [2024-11-17 11:30:09.792345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb8f30 is same with the state(6) to be set
00:35:45.184 [2024-11-17 11:30:09.792373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:35:45.184 [2024-11-17 11:30:09.792383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:35:45.184 [2024-11-17 11:30:09.792393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43112 len:8 PRP1 0x0 PRP2 0x0
00:35:45.184 [2024-11-17 11:30:09.792404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:45.184 [2024-11-17 11:30:09.792567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:45.184 [2024-11-17 11:30:09.792608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:45.184 [2024-11-17 11:30:09.792635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:45.184 [2024-11-17 11:30:09.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:45.184 [2024-11-17 11:30:09.792673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.184 [2024-11-17 11:30:09.795841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.184 [2024-11-17 11:30:09.795892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.184 [2024-11-17 11:30:09.796625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.184 [2024-11-17 11:30:09.796655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.184 [2024-11-17 11:30:09.796672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.184 [2024-11-17 11:30:09.796971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.184 [2024-11-17 11:30:09.797214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.184 [2024-11-17 11:30:09.797233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.184 [2024-11-17 11:30:09.797248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.184 [2024-11-17 11:30:09.797262] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.184 [2024-11-17 11:30:09.810161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.184 [2024-11-17 11:30:09.810722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.184 [2024-11-17 11:30:09.810766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.184 [2024-11-17 11:30:09.810783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.184 [2024-11-17 11:30:09.811072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.184 [2024-11-17 11:30:09.811315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.184 [2024-11-17 11:30:09.811333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.184 [2024-11-17 11:30:09.811345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.184 [2024-11-17 11:30:09.811356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.184 [2024-11-17 11:30:09.824096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.184 [2024-11-17 11:30:09.824579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.184 [2024-11-17 11:30:09.824621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.184 [2024-11-17 11:30:09.824643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.184 [2024-11-17 11:30:09.824948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.184 [2024-11-17 11:30:09.825190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.184 [2024-11-17 11:30:09.825208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.184 [2024-11-17 11:30:09.825220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.184 [2024-11-17 11:30:09.825231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.838376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.838807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.838836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.838867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.839151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.839393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.839411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.839423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.839434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.852556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.853040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.853083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.853100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.853404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.853709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.853730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.853743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.853755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.866868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.867360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.867421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.867438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.867732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.868024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.868043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.868055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.868067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.880934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.881379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.881422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.881439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.881754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.882016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.882035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.882047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.882058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.895695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.896161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.896203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.896220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.896540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.896829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.896851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.896864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.896877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.910312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.910704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.910732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.910748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.911054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.911303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.911322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.911339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.911352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.924898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.925352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.925401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.925418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.925698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.925975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.925993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.926006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.926017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.939429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.939847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.939875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.939891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.940188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.940446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.940465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.940477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.940489] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.953861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.954305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.954349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.954381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.954673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.954966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.954984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.954996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.955007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.968099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.968545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.968574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.968590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.968875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.969135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.969153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.969166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.969177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.982272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.982738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.982766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.982782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.983073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.983337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.983356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.983368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.983379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:09.996372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:09.996809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:09.996836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:09.996852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:09.997136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.444 [2024-11-17 11:30:09.997378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.444 [2024-11-17 11:30:09.997396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.444 [2024-11-17 11:30:09.997408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.444 [2024-11-17 11:30:09.997419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.444 [2024-11-17 11:30:10.011070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.444 [2024-11-17 11:30:10.011583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.444 [2024-11-17 11:30:10.011615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.444 [2024-11-17 11:30:10.011639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.444 [2024-11-17 11:30:10.011939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.445 [2024-11-17 11:30:10.012211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.445 [2024-11-17 11:30:10.012230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.445 [2024-11-17 11:30:10.012243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.445 [2024-11-17 11:30:10.012254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.445 [2024-11-17 11:30:10.026045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.445 [2024-11-17 11:30:10.026547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.445 [2024-11-17 11:30:10.026602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.445 [2024-11-17 11:30:10.026619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.445 [2024-11-17 11:30:10.026917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.445 [2024-11-17 11:30:10.027175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.445 [2024-11-17 11:30:10.027195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.445 [2024-11-17 11:30:10.027208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.445 [2024-11-17 11:30:10.027220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.445 [2024-11-17 11:30:10.040664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.445 [2024-11-17 11:30:10.041103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.445 [2024-11-17 11:30:10.041132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.445 [2024-11-17 11:30:10.041149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.445 [2024-11-17 11:30:10.041421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.445 [2024-11-17 11:30:10.041705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.445 [2024-11-17 11:30:10.041727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.445 [2024-11-17 11:30:10.041742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.445 [2024-11-17 11:30:10.041755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.445 [2024-11-17 11:30:10.055583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.445 [2024-11-17 11:30:10.056016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.445 [2024-11-17 11:30:10.056064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.445 [2024-11-17 11:30:10.056080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.445 [2024-11-17 11:30:10.056408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.445 [2024-11-17 11:30:10.056702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.445 [2024-11-17 11:30:10.056724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.445 [2024-11-17 11:30:10.056738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.445 [2024-11-17 11:30:10.056752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.445 [2024-11-17 11:30:10.070230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.445 [2024-11-17 11:30:10.070676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.445 [2024-11-17 11:30:10.070705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.445 [2024-11-17 11:30:10.070721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.445 [2024-11-17 11:30:10.071025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.445 [2024-11-17 11:30:10.071267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.445 [2024-11-17 11:30:10.071285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.445 [2024-11-17 11:30:10.071312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.445 [2024-11-17 11:30:10.071324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.445 [2024-11-17 11:30:10.084758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.445 [2024-11-17 11:30:10.085194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.445 [2024-11-17 11:30:10.085221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.445 [2024-11-17 11:30:10.085237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.445 [2024-11-17 11:30:10.085512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.445 [2024-11-17 11:30:10.085837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.445 [2024-11-17 11:30:10.085857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.445 [2024-11-17 11:30:10.085884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.445 [2024-11-17 11:30:10.085896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.099552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.100052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.100094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.100110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.100427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.100728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.100750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.100769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.100782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.113770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.114235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.114277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.114292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.114623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.114935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.114970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.114983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.114995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.128032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.128421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.128482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.128497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.128845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.129162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.129181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.129194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.129206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.142607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.142995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.143035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.143051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.143347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.143678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.143700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.143713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.143725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.156977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.157402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.157431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.157447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.157777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.158039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.158057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.158069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.158080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.171520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.171899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.171924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.171938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.172199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.172483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.172504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.172549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.172562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.186064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.186520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.186554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.186570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.186857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.187116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.187134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.187146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.187157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.200521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.200931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.200958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.200978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.201265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.201532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.201551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.201563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.201591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 [2024-11-17 11:30:10.215075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.215503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.215538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.215555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.215852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.705 [2024-11-17 11:30:10.216116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.705 [2024-11-17 11:30:10.216134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.705 [2024-11-17 11:30:10.216145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.705 [2024-11-17 11:30:10.216156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.705 7226.33 IOPS, 28.23 MiB/s [2024-11-17T10:30:10.363Z] [2024-11-17 11:30:10.229464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.705 [2024-11-17 11:30:10.229995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.705 [2024-11-17 11:30:10.230024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.705 [2024-11-17 11:30:10.230040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.705 [2024-11-17 11:30:10.230330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.230618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.230639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.230652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.230664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.243684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.244114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.244141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.244157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.244456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.244759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.244780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.244794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.244820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.258141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.258627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.258655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.258671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.258966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.259241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.259262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.259274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.259287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.272475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.272889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.272932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.272947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.273247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.273540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.273561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.273590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.273602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.286728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.287225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.287267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.287283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.287578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.287873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.287891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.287908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.287919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.301108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.301500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.301572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.301588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.301877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.302134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.302152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.302164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.302175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.315551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.316044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.316087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.316103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.316408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.316719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.316741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.316754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.316781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.329782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.330271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.330298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.330329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.330646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.330944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.330962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.330974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.330985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.344262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.706 [2024-11-17 11:30:10.344639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.706 [2024-11-17 11:30:10.344667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.706 [2024-11-17 11:30:10.344683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.706 [2024-11-17 11:30:10.344968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.706 [2024-11-17 11:30:10.345244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.706 [2024-11-17 11:30:10.345262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.706 [2024-11-17 11:30:10.345274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.706 [2024-11-17 11:30:10.345284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.706 [2024-11-17 11:30:10.359046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.967 [2024-11-17 11:30:10.359577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:45.967 [2024-11-17 11:30:10.359606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:45.967 [2024-11-17 11:30:10.359623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:45.967 [2024-11-17 11:30:10.359908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:45.967 [2024-11-17 11:30:10.360200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.967 [2024-11-17 11:30:10.360233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.967 [2024-11-17 11:30:10.360245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:45.967 [2024-11-17 11:30:10.360257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:45.967 [2024-11-17 11:30:10.373313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.373758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.373786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.373801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.374086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.374383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.374402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.374414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.374425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.387714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.388159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.388187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.388226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.388510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.388805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.388826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.388839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.388850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.402025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.402456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.402483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.402498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.402807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.403100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.403120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.403133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.403145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.416315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.416732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.416760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.416777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.417074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.417334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.417352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.417364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.417375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.430700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.431198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.431250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.431265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.431603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.431881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.431899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.431911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.431922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.445202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.445734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.445787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.445802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.446111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.446390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.446425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.446438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.446450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.459571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.460034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.460062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.460095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.967 [2024-11-17 11:30:10.460380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.967 [2024-11-17 11:30:10.460690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.967 [2024-11-17 11:30:10.460727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.967 [2024-11-17 11:30:10.460741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.967 [2024-11-17 11:30:10.460754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.967 [2024-11-17 11:30:10.473915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.967 [2024-11-17 11:30:10.474335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.967 [2024-11-17 11:30:10.474378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.967 [2024-11-17 11:30:10.474394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.474676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.474968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.474986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.475003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.475014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.488401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.488825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.488854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.488870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.489178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.489419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.489437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.489449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.489475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.502895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.503404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.503446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.503463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.503756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.504036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.504055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.504067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.504078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.517124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.517573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.517600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.517616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.517940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.518220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.518239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.518267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.518278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.531199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.531650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.531677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.531692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.532005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.532297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.532317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.532330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.532343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.545542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.545972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.546013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.546029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.546326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.546638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.546673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.546686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.546697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.559921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.560506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.560541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.560559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.560844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.561123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.561142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.561169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.561181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.574466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.574879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.574908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.574929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.575220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.575462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.575480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.575492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.575518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.588633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.589087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.589154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.589170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.589481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.589774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.589795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.589808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.589820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.602875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.968 [2024-11-17 11:30:10.603285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.968 [2024-11-17 11:30:10.603327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.968 [2024-11-17 11:30:10.603343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.968 [2024-11-17 11:30:10.603653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.968 [2024-11-17 11:30:10.603942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.968 [2024-11-17 11:30:10.603960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.968 [2024-11-17 11:30:10.603972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.968 [2024-11-17 11:30:10.603998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:45.968 [2024-11-17 11:30:10.617263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:45.969 [2024-11-17 11:30:10.617782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:45.969 [2024-11-17 11:30:10.617846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:45.969 [2024-11-17 11:30:10.617863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:45.969 [2024-11-17 11:30:10.618162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:45.969 [2024-11-17 11:30:10.618460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:45.969 [2024-11-17 11:30:10.618496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:45.969 [2024-11-17 11:30:10.618510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:45.969 [2024-11-17 11:30:10.618532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.228 [2024-11-17 11:30:10.631778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.228 [2024-11-17 11:30:10.632254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.228 [2024-11-17 11:30:10.632307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.228 [2024-11-17 11:30:10.632323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.228 [2024-11-17 11:30:10.632647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.228 [2024-11-17 11:30:10.632957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.228 [2024-11-17 11:30:10.632975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.228 [2024-11-17 11:30:10.633001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.228 [2024-11-17 11:30:10.633012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.228 [2024-11-17 11:30:10.646001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.228 [2024-11-17 11:30:10.646540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.228 [2024-11-17 11:30:10.646584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.228 [2024-11-17 11:30:10.646599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.228 [2024-11-17 11:30:10.646908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.228 [2024-11-17 11:30:10.647150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.228 [2024-11-17 11:30:10.647168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.228 [2024-11-17 11:30:10.647180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.228 [2024-11-17 11:30:10.647191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.228 [2024-11-17 11:30:10.659891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.228 [2024-11-17 11:30:10.660332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.228 [2024-11-17 11:30:10.660358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.228 [2024-11-17 11:30:10.660372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.228 [2024-11-17 11:30:10.660685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.228 [2024-11-17 11:30:10.660953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.228 [2024-11-17 11:30:10.660971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.228 [2024-11-17 11:30:10.660988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.228 [2024-11-17 11:30:10.660999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.228 [2024-11-17 11:30:10.673862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.228 [2024-11-17 11:30:10.674274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.674301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.674317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.674629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.674888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.674921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.674933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.674945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.687831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.688311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.688337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.688367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.688674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.688937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.688955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.688967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.688978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.701673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.702022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.702047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.702062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.702321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.702589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.702609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.702621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.702633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.715534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.715946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.715987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.716002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.716300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.716567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.716587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.716599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.716610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.729490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.729841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.729903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.729944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.730202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.730444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.730461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.730473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.730484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.743491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.743940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.743967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.743983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.744283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.744551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.744585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.744598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.744609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.757657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:46.229 [2024-11-17 11:30:10.758059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:46.229 [2024-11-17 11:30:10.758084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:46.229 [2024-11-17 11:30:10.758119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:46.229 [2024-11-17 11:30:10.758401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:46.229 [2024-11-17 11:30:10.758674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:46.229 [2024-11-17 11:30:10.758694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:46.229 [2024-11-17 11:30:10.758706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:46.229 [2024-11-17 11:30:10.758718] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:46.229 [2024-11-17 11:30:10.771767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.229 [2024-11-17 11:30:10.772218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.229 [2024-11-17 11:30:10.772246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.229 [2024-11-17 11:30:10.772262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.229 [2024-11-17 11:30:10.772585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.229 [2024-11-17 11:30:10.772858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.229 [2024-11-17 11:30:10.772878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.229 [2024-11-17 11:30:10.772891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.229 [2024-11-17 11:30:10.772903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.229 [2024-11-17 11:30:10.786077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.229 [2024-11-17 11:30:10.786540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.229 [2024-11-17 11:30:10.786568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.229 [2024-11-17 11:30:10.786584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.229 [2024-11-17 11:30:10.786876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.229 [2024-11-17 11:30:10.787126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.229 [2024-11-17 11:30:10.787144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.229 [2024-11-17 11:30:10.787157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.229 [2024-11-17 11:30:10.787168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.229 [2024-11-17 11:30:10.800252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.229 [2024-11-17 11:30:10.800710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.800737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.800753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.801053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.801299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.801318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.801330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.801341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.230 [2024-11-17 11:30:10.814424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.230 [2024-11-17 11:30:10.815099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.815142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.815159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.815481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.815791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.815814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.815828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.815840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.230 [2024-11-17 11:30:10.828649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.230 [2024-11-17 11:30:10.829066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.829092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.829123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.829428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.829723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.829743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.829756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.829768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.230 [2024-11-17 11:30:10.842662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.230 [2024-11-17 11:30:10.843158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.843200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.843217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.843541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.843791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.843825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.843842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.843854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.230 [2024-11-17 11:30:10.856806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.230 [2024-11-17 11:30:10.857244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.857271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.857287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.857611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.857913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.857932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.857945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.857956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.230 [2024-11-17 11:30:10.871041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.230 [2024-11-17 11:30:10.871437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.230 [2024-11-17 11:30:10.871465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.230 [2024-11-17 11:30:10.871482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.230 [2024-11-17 11:30:10.871787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.230 [2024-11-17 11:30:10.872050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.230 [2024-11-17 11:30:10.872069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.230 [2024-11-17 11:30:10.872081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.230 [2024-11-17 11:30:10.872092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.489 [2024-11-17 11:30:10.885615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.489 [2024-11-17 11:30:10.886007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.489 [2024-11-17 11:30:10.886033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.489 [2024-11-17 11:30:10.886048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.489 [2024-11-17 11:30:10.886330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.489 [2024-11-17 11:30:10.886582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.489 [2024-11-17 11:30:10.886601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.489 [2024-11-17 11:30:10.886613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.489 [2024-11-17 11:30:10.886624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.489 [2024-11-17 11:30:10.899775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.489 [2024-11-17 11:30:10.900126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.489 [2024-11-17 11:30:10.900166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.489 [2024-11-17 11:30:10.900181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.489 [2024-11-17 11:30:10.900476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.489 [2024-11-17 11:30:10.900750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.489 [2024-11-17 11:30:10.900769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.489 [2024-11-17 11:30:10.900782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.489 [2024-11-17 11:30:10.900793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.489 [2024-11-17 11:30:10.913749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.489 [2024-11-17 11:30:10.914180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.489 [2024-11-17 11:30:10.914222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.489 [2024-11-17 11:30:10.914238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.489 [2024-11-17 11:30:10.914573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.489 [2024-11-17 11:30:10.914825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.489 [2024-11-17 11:30:10.914843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.489 [2024-11-17 11:30:10.914856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.489 [2024-11-17 11:30:10.914882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.489 [2024-11-17 11:30:10.927631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.489 [2024-11-17 11:30:10.928040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.489 [2024-11-17 11:30:10.928080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.489 [2024-11-17 11:30:10.928095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.489 [2024-11-17 11:30:10.928386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.489 [2024-11-17 11:30:10.928677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.489 [2024-11-17 11:30:10.928697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.489 [2024-11-17 11:30:10.928710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.489 [2024-11-17 11:30:10.928721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.489 [2024-11-17 11:30:10.941634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.489 [2024-11-17 11:30:10.942013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:10.942039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:10.942060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:10.942350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:10.942619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:10.942648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:10.942660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:10.942673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:10.955745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:10.956236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:10.956279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:10.956295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:10.956613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:10.956899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:10.956917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:10.956929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:10.956940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:10.969984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:10.970508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:10.970585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:10.970602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:10.970906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:10.971164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:10.971183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:10.971195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:10.971206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:10.984113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:10.984540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:10.984583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:10.984599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:10.984907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:10.985170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:10.985190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:10.985202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:10.985213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:10.998465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:10.998996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:10.999047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:10.999063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:10.999359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:10.999665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:10.999687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:10.999701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:10.999713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:11.013114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:11.013536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:11.013565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:11.013581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:11.013863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:11.014132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:11.014152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:11.014165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:11.014177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:11.027652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:11.028092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:11.028129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:11.028161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:11.028457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:11.028758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:11.028780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:11.028799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:11.028812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:11.042006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:11.042423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:11.042467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:11.042483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:11.042779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:11.043042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:11.043060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:11.043073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:11.043084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:11.056252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:11.056704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:11.056732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:11.056748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:11.057050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:11.057293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:11.057311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:11.057323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.490 [2024-11-17 11:30:11.057334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.490 [2024-11-17 11:30:11.070921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.490 [2024-11-17 11:30:11.071375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.490 [2024-11-17 11:30:11.071422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.490 [2024-11-17 11:30:11.071438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.490 [2024-11-17 11:30:11.071719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.490 [2024-11-17 11:30:11.072010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.490 [2024-11-17 11:30:11.072044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.490 [2024-11-17 11:30:11.072057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.072069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.491 [2024-11-17 11:30:11.085534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.491 [2024-11-17 11:30:11.086000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.491 [2024-11-17 11:30:11.086051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.491 [2024-11-17 11:30:11.086087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.491 [2024-11-17 11:30:11.086380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.491 [2024-11-17 11:30:11.086674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.491 [2024-11-17 11:30:11.086696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.491 [2024-11-17 11:30:11.086710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.086722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.491 [2024-11-17 11:30:11.099957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.491 [2024-11-17 11:30:11.100379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.491 [2024-11-17 11:30:11.100426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.491 [2024-11-17 11:30:11.100443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.491 [2024-11-17 11:30:11.100740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.491 [2024-11-17 11:30:11.101007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.491 [2024-11-17 11:30:11.101025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.491 [2024-11-17 11:30:11.101037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.101047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.491 [2024-11-17 11:30:11.114182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.491 [2024-11-17 11:30:11.114539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.491 [2024-11-17 11:30:11.114591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.491 [2024-11-17 11:30:11.114607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.491 [2024-11-17 11:30:11.114903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.491 [2024-11-17 11:30:11.115144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.491 [2024-11-17 11:30:11.115163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.491 [2024-11-17 11:30:11.115175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.115186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.491 [2024-11-17 11:30:11.128189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.491 [2024-11-17 11:30:11.128671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.491 [2024-11-17 11:30:11.128699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.491 [2024-11-17 11:30:11.128721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.491 [2024-11-17 11:30:11.129027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.491 [2024-11-17 11:30:11.129269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.491 [2024-11-17 11:30:11.129287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.491 [2024-11-17 11:30:11.129300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.129311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.491 [2024-11-17 11:30:11.142629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.491 [2024-11-17 11:30:11.143127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.491 [2024-11-17 11:30:11.143168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.491 [2024-11-17 11:30:11.143184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.491 [2024-11-17 11:30:11.143483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.491 [2024-11-17 11:30:11.143802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.491 [2024-11-17 11:30:11.143823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.491 [2024-11-17 11:30:11.143837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.491 [2024-11-17 11:30:11.143850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.156743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.157182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.157209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.157224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.157503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.157800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.157834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.157846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.157858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.170819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.171297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.171339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.171355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.171660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.171948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.171967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.171979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.171990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.184893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.185367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.185394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.185424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.185730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.186009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.186028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.186040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.186051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.198999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.199403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.199419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.199738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.200017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.200035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.200048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.200059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.212911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.213361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.213388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.213403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.213697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.213961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.213979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.213996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.214007] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 5419.75 IOPS, 21.17 MiB/s [2024-11-17T10:30:11.408Z] [2024-11-17 11:30:11.226937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.227412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.750 [2024-11-17 11:30:11.227439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.750 [2024-11-17 11:30:11.227454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.750 [2024-11-17 11:30:11.227773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.750 [2024-11-17 11:30:11.228051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.750 [2024-11-17 11:30:11.228069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.750 [2024-11-17 11:30:11.228081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.750 [2024-11-17 11:30:11.228092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.750 [2024-11-17 11:30:11.240967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.750 [2024-11-17 11:30:11.241315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.241357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.241373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.241650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.241914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.241932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.241944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.241955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.254825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.255216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.255242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.255258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.255559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.255823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.255842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.255854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.255866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.268791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.269170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.269197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.269212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.269500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.269790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.269809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.269821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.269833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.282755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.283230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.283271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.283288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.283589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.283853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.283871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.283883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.283894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.296915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.297328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.297370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.297385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.297703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.297985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.298003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.298015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.298026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.310903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.311315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.311357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.311377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.311677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.311959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.311978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.311990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.312001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.324796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.325207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.325234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.325249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.325560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.325832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.325852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.325865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.325877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.339138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.339551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.339578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.339594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.339894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.340136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.340154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.340165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.340177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.353150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.353563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.353605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.353622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.751 [2024-11-17 11:30:11.353940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.751 [2024-11-17 11:30:11.354187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.751 [2024-11-17 11:30:11.354205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.751 [2024-11-17 11:30:11.354217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.751 [2024-11-17 11:30:11.354228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.751 [2024-11-17 11:30:11.367309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.751 [2024-11-17 11:30:11.367757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.751 [2024-11-17 11:30:11.367785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.751 [2024-11-17 11:30:11.367800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.752 [2024-11-17 11:30:11.368083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.752 [2024-11-17 11:30:11.368325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.752 [2024-11-17 11:30:11.368343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.752 [2024-11-17 11:30:11.368355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.752 [2024-11-17 11:30:11.368366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.752 [2024-11-17 11:30:11.381294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.752 [2024-11-17 11:30:11.381838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.752 [2024-11-17 11:30:11.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.752 [2024-11-17 11:30:11.381898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.752 [2024-11-17 11:30:11.382202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.752 [2024-11-17 11:30:11.382443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.752 [2024-11-17 11:30:11.382461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.752 [2024-11-17 11:30:11.382473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.752 [2024-11-17 11:30:11.382484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.752 [2024-11-17 11:30:11.395393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.752 [2024-11-17 11:30:11.395813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:46.752 [2024-11-17 11:30:11.395840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:46.752 [2024-11-17 11:30:11.395856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:46.752 [2024-11-17 11:30:11.396156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:46.752 [2024-11-17 11:30:11.396398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.752 [2024-11-17 11:30:11.396416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.752 [2024-11-17 11:30:11.396433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.752 [2024-11-17 11:30:11.396445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.011 [2024-11-17 11:30:11.409579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.011 [2024-11-17 11:30:11.409977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.011 [2024-11-17 11:30:11.410005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:47.011 [2024-11-17 11:30:11.410022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:47.011 [2024-11-17 11:30:11.410307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:47.011 [2024-11-17 11:30:11.410648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.011 [2024-11-17 11:30:11.410683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.011 [2024-11-17 11:30:11.410696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.011 [2024-11-17 11:30:11.410709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:47.011 [2024-11-17 11:30:11.423441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.423967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.423994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.011 [2024-11-17 11:30:11.424025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.011 [2024-11-17 11:30:11.424324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.011 [2024-11-17 11:30:11.424611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.011 [2024-11-17 11:30:11.424632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.011 [2024-11-17 11:30:11.424645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.011 [2024-11-17 11:30:11.424657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.011 [2024-11-17 11:30:11.437466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.437865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.437894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.011 [2024-11-17 11:30:11.437912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.011 [2024-11-17 11:30:11.438201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.011 [2024-11-17 11:30:11.438445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.011 [2024-11-17 11:30:11.438465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.011 [2024-11-17 11:30:11.438479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.011 [2024-11-17 11:30:11.438491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.011 [2024-11-17 11:30:11.451395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.451867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.451917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.011 [2024-11-17 11:30:11.451932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.011 [2024-11-17 11:30:11.452243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.011 [2024-11-17 11:30:11.452485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.011 [2024-11-17 11:30:11.452503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.011 [2024-11-17 11:30:11.452515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.011 [2024-11-17 11:30:11.452550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.011 [2024-11-17 11:30:11.465433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.465822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.465890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.011 [2024-11-17 11:30:11.465925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.011 [2024-11-17 11:30:11.466183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.011 [2024-11-17 11:30:11.466425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.011 [2024-11-17 11:30:11.466443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.011 [2024-11-17 11:30:11.466455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.011 [2024-11-17 11:30:11.466466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.011 [2024-11-17 11:30:11.479333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.479805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.479857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.011 [2024-11-17 11:30:11.479872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.011 [2024-11-17 11:30:11.480164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.011 [2024-11-17 11:30:11.480406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.011 [2024-11-17 11:30:11.480423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.011 [2024-11-17 11:30:11.480435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.011 [2024-11-17 11:30:11.480446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.011 [2024-11-17 11:30:11.493427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.011 [2024-11-17 11:30:11.493960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.011 [2024-11-17 11:30:11.493987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.494007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.494288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.494555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.494575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.494588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.494614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.507438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.507881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.507923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.507939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.508251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.508494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.508512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.508546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.508561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.521378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.521863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.521889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.521920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.522199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.522441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.522459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.522471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.522482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.535354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.535742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.535770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.535785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.536072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.536319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.536337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.536349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.536360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.549244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.549723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.549751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.549767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.550050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.550292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.550310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.550322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.550333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.563223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.563698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.563725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.563741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.564039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.564282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.564300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.564312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.564323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.577214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.577673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.577717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.577734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.578022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.578264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.578283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.578299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.578311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.591675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.592172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.592213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.592230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.592569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.592820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.592839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.592866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.592877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.605623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.606028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.606097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.606113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.606410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.606684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.606704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.606716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.606728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.012 [2024-11-17 11:30:11.619790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.012 [2024-11-17 11:30:11.620268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.012 [2024-11-17 11:30:11.620318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.012 [2024-11-17 11:30:11.620333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.012 [2024-11-17 11:30:11.620663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.012 [2024-11-17 11:30:11.620953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.012 [2024-11-17 11:30:11.620987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.012 [2024-11-17 11:30:11.620999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.012 [2024-11-17 11:30:11.621011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.013 [2024-11-17 11:30:11.633641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.013 [2024-11-17 11:30:11.634166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.013 [2024-11-17 11:30:11.634219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.013 [2024-11-17 11:30:11.634234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.013 [2024-11-17 11:30:11.634558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.013 [2024-11-17 11:30:11.634843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.013 [2024-11-17 11:30:11.634863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.013 [2024-11-17 11:30:11.634875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.013 [2024-11-17 11:30:11.634886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.013 [2024-11-17 11:30:11.647595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.013 [2024-11-17 11:30:11.648001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.013 [2024-11-17 11:30:11.648028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.013 [2024-11-17 11:30:11.648044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.013 [2024-11-17 11:30:11.648345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.013 [2024-11-17 11:30:11.648616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.013 [2024-11-17 11:30:11.648636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.013 [2024-11-17 11:30:11.648648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.013 [2024-11-17 11:30:11.648660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.013 [2024-11-17 11:30:11.661472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.013 [2024-11-17 11:30:11.661905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.013 [2024-11-17 11:30:11.661933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.013 [2024-11-17 11:30:11.661948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.013 [2024-11-17 11:30:11.662218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.013 [2024-11-17 11:30:11.662499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.013 [2024-11-17 11:30:11.662519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.013 [2024-11-17 11:30:11.662560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.013 [2024-11-17 11:30:11.662574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.272 [2024-11-17 11:30:11.675496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.272 [2024-11-17 11:30:11.676006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.272 [2024-11-17 11:30:11.676033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.272 [2024-11-17 11:30:11.676052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.272 [2024-11-17 11:30:11.676354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.272 [2024-11-17 11:30:11.676626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.272 [2024-11-17 11:30:11.676647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.272 [2024-11-17 11:30:11.676659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.272 [2024-11-17 11:30:11.676671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.272 [2024-11-17 11:30:11.689493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.689908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.689935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.689951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.690252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.690494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.690512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.690532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.690561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.703382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.703766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.703793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.703808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.704100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.704358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.704376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.704388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.704399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.717354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.717774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.717801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.717816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.718116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.718363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.718381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.718393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.718404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.731310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.731700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.731736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.731769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.732053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.732295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.732313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.732325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.732336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.745245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.745786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.745813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.745828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.746113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.746354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.746372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.746384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.746395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.759168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.759584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.759628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.759644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.759949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.760192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.760209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.760229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.760240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.773372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.773859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.773907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.773924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.774216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.774466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.774484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.774496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.774508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.787851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.788305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.788342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.788376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.788688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.788983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.789002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.789014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.789026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.802205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.802650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.273 [2024-11-17 11:30:11.802678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.273 [2024-11-17 11:30:11.802694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.273 [2024-11-17 11:30:11.802989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.273 [2024-11-17 11:30:11.803239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.273 [2024-11-17 11:30:11.803257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.273 [2024-11-17 11:30:11.803270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.273 [2024-11-17 11:30:11.803281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.273 [2024-11-17 11:30:11.816586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.273 [2024-11-17 11:30:11.817077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.817109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.817141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.817436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.817715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.817735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.817748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.817760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.830708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.831175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.831217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.831234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.831774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.832061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.832080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.832093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.832104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.845142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.845604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.845633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.845650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.845943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.846192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.846210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.846223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.846234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.859375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.859808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.859837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.859858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.860151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.860400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.860419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.860432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.860443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.873650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.874039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.874066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.874096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.874384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.874687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.874710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.874737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.874749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.888056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.888483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.888511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.888537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.888850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.889100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.889118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.889131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.889142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.902283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.902723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.902765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.902782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.903078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.903349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.903368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.903381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.903392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.274 [2024-11-17 11:30:11.916594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.274 [2024-11-17 11:30:11.917051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.274 [2024-11-17 11:30:11.917078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.274 [2024-11-17 11:30:11.917094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.274 [2024-11-17 11:30:11.917383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.274 [2024-11-17 11:30:11.917682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.274 [2024-11-17 11:30:11.917704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.274 [2024-11-17 11:30:11.917733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.274 [2024-11-17 11:30:11.917745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.534 [2024-11-17 11:30:11.931321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.534 [2024-11-17 11:30:11.931773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.534 [2024-11-17 11:30:11.931801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.534 [2024-11-17 11:30:11.931819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.534 [2024-11-17 11:30:11.932116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.534 [2024-11-17 11:30:11.932415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.534 [2024-11-17 11:30:11.932436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.534 [2024-11-17 11:30:11.932452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.534 [2024-11-17 11:30:11.932466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:11.945611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:11.946033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:11.946074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:11.946091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:11.946406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:11.946726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:11.946747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:11.946780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:11.946799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:11.959923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:11.960414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:11.960441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:11.960473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:11.960754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:11.961043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:11.961062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:11.961074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:11.961086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:11.974162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:11.974611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:11.974640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:11.974656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:11.974952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:11.975202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:11.975220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:11.975233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:11.975244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:11.988320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:11.988724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:11.988751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:11.988766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:11.989040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:11.989289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:11.989308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:11.989321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:11.989332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.002750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.003165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.003207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.003223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.003520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.003826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.003847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.003876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.003888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.017191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.017561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.017605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.017621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.017905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.018163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.018182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.018194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.018206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.031498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.031987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.032015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.032031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.032325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.032605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.032626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.032639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.032651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.045850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.046277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.046304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.046326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.046620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.046898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.046918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.046930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.046941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.060145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.060697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.060725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.060742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.061050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.061299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.061318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.061331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.061342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.074312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.074765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.074793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.074810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.075106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.075356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.075374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.075386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.075398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.088443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.088872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.088901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.088917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.089214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.089469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.089488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.089500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.089536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.102602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.103063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.103091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.103106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.103393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.103674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.103695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.103708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.103719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.116888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.117334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.117362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.117394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.117686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.117956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.117976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.117988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.117999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.131120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.131607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.131634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.131650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.131945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.132215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.132234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.132251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.132263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.145458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.145871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.145912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.145928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.146236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.146486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.146504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.146516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.146552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.159791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.160242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.160284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.160301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.160607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.160878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.160897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.160909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.160921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.173992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.535 [2024-11-17 11:30:12.174365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.535 [2024-11-17 11:30:12.174407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.535 [2024-11-17 11:30:12.174422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.535 [2024-11-17 11:30:12.174755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.535 [2024-11-17 11:30:12.175022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.535 [2024-11-17 11:30:12.175041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.535 [2024-11-17 11:30:12.175054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.535 [2024-11-17 11:30:12.175065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.535 [2024-11-17 11:30:12.188697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.189102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.189129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.189145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.189430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.189742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.189764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.189777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.189789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 [2024-11-17 11:30:12.202921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.203422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.203465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.203481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.203791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.204058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.204076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.204089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.204100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 [2024-11-17 11:30:12.217079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.217600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.217628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.217644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.217945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.218195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.218214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.218227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.218238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 4335.80 IOPS, 16.94 MiB/s [2024-11-17T10:30:12.454Z] [2024-11-17 11:30:12.231345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.231758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.231787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.231809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.232114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.232363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.232381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.232393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.232404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 [2024-11-17 11:30:12.245673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.246106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.246147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.246164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.246458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.246745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.246766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.246778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.246790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 [2024-11-17 11:30:12.260064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.260540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.260569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.260585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.260868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.261133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.261152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.261164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.796 [2024-11-17 11:30:12.261175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.796 [2024-11-17 11:30:12.274357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.796 [2024-11-17 11:30:12.274768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.796 [2024-11-17 11:30:12.274796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.796 [2024-11-17 11:30:12.274812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.796 [2024-11-17 11:30:12.275107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.796 [2024-11-17 11:30:12.275361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.796 [2024-11-17 11:30:12.275380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.796 [2024-11-17 11:30:12.275392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.275403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.288652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.289059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.289100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.289116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.289405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.289704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.289725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.289738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.289750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.302918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.303486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.303535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.303554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.303862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.304111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.304130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.304142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.304153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.317238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.317661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.317689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.317706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.317999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.318249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.318267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.318284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.318296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.331425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.331958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.331986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.332002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.332311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.332605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.332626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.332640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.332666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.345633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.346114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.346156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.346172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.346465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.346773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.346794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.346823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.346836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.359945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.360369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.360411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.360427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.360743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.361011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.361031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.361043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.361054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.374227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.374670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.374698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.374714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.375009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.375258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.375277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.375289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.375300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.388493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.388920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.388947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.388979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.389271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.389548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.389583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.389596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.389608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.402712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.403134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.403161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.403176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.403465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.403774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.797 [2024-11-17 11:30:12.403796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.797 [2024-11-17 11:30:12.403809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.797 [2024-11-17 11:30:12.403837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.797 [2024-11-17 11:30:12.416959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.797 [2024-11-17 11:30:12.417390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.797 [2024-11-17 11:30:12.417418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.797 [2024-11-17 11:30:12.417439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.797 [2024-11-17 11:30:12.417721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.797 [2024-11-17 11:30:12.418011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.798 [2024-11-17 11:30:12.418030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.798 [2024-11-17 11:30:12.418043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.798 [2024-11-17 11:30:12.418054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.798 [2024-11-17 11:30:12.431156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:47.798 [2024-11-17 11:30:12.431583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:47.798 [2024-11-17 11:30:12.431611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:47.798 [2024-11-17 11:30:12.431627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:47.798 [2024-11-17 11:30:12.431921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:47.798 [2024-11-17 11:30:12.432170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:47.798 [2024-11-17 11:30:12.432188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:47.798 [2024-11-17 11:30:12.432200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:47.798 [2024-11-17 11:30:12.432212] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:47.798 [2024-11-17 11:30:12.445360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.798 [2024-11-17 11:30:12.445789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:47.798 [2024-11-17 11:30:12.445817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:47.798 [2024-11-17 11:30:12.445833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:47.798 [2024-11-17 11:30:12.446118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:47.798 [2024-11-17 11:30:12.446437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.798 [2024-11-17 11:30:12.446475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.798 [2024-11-17 11:30:12.446488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:47.798 [2024-11-17 11:30:12.446500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.459932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.460317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.460345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.460361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.460657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.460957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.460977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.460989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.461000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.474114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.474598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.474626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.474641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.474936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.475186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.475205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.475217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.475228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.488414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.488836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.488863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.488879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.489175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.489424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.489442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.489454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.489466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.502730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.503139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.503166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.503181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.503471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.503773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.503794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.503812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.503839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.516979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.517345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.517386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.517402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.517709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.517998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.518017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.518029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.518040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.531183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.531738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.531767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.531783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.532075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.532324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.532343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.532355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.532366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.545314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.545786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.058 [2024-11-17 11:30:12.545814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.058 [2024-11-17 11:30:12.545830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.058 [2024-11-17 11:30:12.546123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.058 [2024-11-17 11:30:12.546372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.058 [2024-11-17 11:30:12.546391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.058 [2024-11-17 11:30:12.546404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.058 [2024-11-17 11:30:12.546415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.058 [2024-11-17 11:30:12.559591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.058 [2024-11-17 11:30:12.560022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.560064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.560081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.560373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.560669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.560691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.560718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.560731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.573882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.574305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.574333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.574349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.574641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.574935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.574954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.574966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.574978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.588077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.588596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.588624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.588640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.588936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.589186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.589204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.589216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.589227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.602369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.602776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.602804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.602825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.603123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.603372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.603391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.603403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.603415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.616933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.617394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.617436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.617452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.617756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.618026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.618044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.618057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.618068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.631248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.631673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.631702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.631718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.632013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.632262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.632280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.632292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.632304] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.645533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.646017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.646058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.646075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.646385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.646687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.646708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.646721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.646733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.659629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.660075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.660103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.660119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.660412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.660694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.660715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.660728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.660740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.673982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.674378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.674405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.059 [2024-11-17 11:30:12.674420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.059 [2024-11-17 11:30:12.674725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.059 [2024-11-17 11:30:12.674993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.059 [2024-11-17 11:30:12.675012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.059 [2024-11-17 11:30:12.675024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.059 [2024-11-17 11:30:12.675035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.059 [2024-11-17 11:30:12.688245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.059 [2024-11-17 11:30:12.688646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.059 [2024-11-17 11:30:12.688673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.060 [2024-11-17 11:30:12.688689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.060 [2024-11-17 11:30:12.688988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.060 [2024-11-17 11:30:12.689237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.060 [2024-11-17 11:30:12.689255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.060 [2024-11-17 11:30:12.689273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.060 [2024-11-17 11:30:12.689284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.060 [2024-11-17 11:30:12.702441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.060 [2024-11-17 11:30:12.702870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.060 [2024-11-17 11:30:12.702897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.060 [2024-11-17 11:30:12.702914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.060 [2024-11-17 11:30:12.703209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.060 [2024-11-17 11:30:12.703458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.060 [2024-11-17 11:30:12.703476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.060 [2024-11-17 11:30:12.703488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.060 [2024-11-17 11:30:12.703500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 [2024-11-17 11:30:12.716775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.320 [2024-11-17 11:30:12.717168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.320 [2024-11-17 11:30:12.717196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.320 [2024-11-17 11:30:12.717212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.320 [2024-11-17 11:30:12.717501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.320 [2024-11-17 11:30:12.717808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.320 [2024-11-17 11:30:12.717830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.320 [2024-11-17 11:30:12.717843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.320 [2024-11-17 11:30:12.717856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 [2024-11-17 11:30:12.730940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.320 [2024-11-17 11:30:12.731298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.320 [2024-11-17 11:30:12.731339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.320 [2024-11-17 11:30:12.731355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.320 [2024-11-17 11:30:12.731647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.320 [2024-11-17 11:30:12.731945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.320 [2024-11-17 11:30:12.731964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.320 [2024-11-17 11:30:12.731976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.320 [2024-11-17 11:30:12.731987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 [2024-11-17 11:30:12.745103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.320 [2024-11-17 11:30:12.745594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.320 [2024-11-17 11:30:12.745622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.320 [2024-11-17 11:30:12.745638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.320 [2024-11-17 11:30:12.745932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.320 [2024-11-17 11:30:12.746181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.320 [2024-11-17 11:30:12.746199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.320 [2024-11-17 11:30:12.746212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.320 [2024-11-17 11:30:12.746223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 [2024-11-17 11:30:12.759366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.320 [2024-11-17 11:30:12.759838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.320 [2024-11-17 11:30:12.759865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.320 [2024-11-17 11:30:12.759881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.320 [2024-11-17 11:30:12.760175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.320 [2024-11-17 11:30:12.760424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.320 [2024-11-17 11:30:12.760442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.320 [2024-11-17 11:30:12.760455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.320 [2024-11-17 11:30:12.760466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 [2024-11-17 11:30:12.773627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.320 [2024-11-17 11:30:12.774007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.320 [2024-11-17 11:30:12.774048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.320 [2024-11-17 11:30:12.774064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.320 [2024-11-17 11:30:12.774353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.320 [2024-11-17 11:30:12.774650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.320 [2024-11-17 11:30:12.774671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.320 [2024-11-17 11:30:12.774699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.320 [2024-11-17 11:30:12.774711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 398531 Killed "${NVMF_APP[@]}" "$@"
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.320 [2024-11-17 11:30:12.787886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.320 [2024-11-17 11:30:12.788248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.320 [2024-11-17 11:30:12.788292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.320 [2024-11-17 11:30:12.788308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=399991
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:48.320 [2024-11-17 11:30:12.788618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 399991
00:35:48.320 [2024-11-17 11:30:12.788899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.320 [2024-11-17 11:30:12.788919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.320 [2024-11-17 11:30:12.788931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.320 [2024-11-17 11:30:12.788943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 399991 ']'
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:48.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:48.320 11:30:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.320 [2024-11-17 11:30:12.802171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.320 [2024-11-17 11:30:12.802617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.320 [2024-11-17 11:30:12.802646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.320 [2024-11-17 11:30:12.802662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.802960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.803209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.803227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.803239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.803251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.816599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.817059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.817087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.817103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.817398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.817685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.817707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.817720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.817732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.830737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.831306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.831349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.831365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.831669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.831940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.831959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.831971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.831982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.838232] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:35:48.321 [2024-11-17 11:30:12.838307] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:48.321 [2024-11-17 11:30:12.845088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.845596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.845625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.845641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.845933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.846182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.846201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.846213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.846224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.859394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.859870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.859899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.859915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.860230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.860489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.860542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.860559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.860572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.873741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.874120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.874162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.874177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.874467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.874752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.874773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.874786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.874797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.888061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.888678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.888707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.888723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.889022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.889279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.889299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.889312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.889323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.902532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.902980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.903008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.903028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.903327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.903629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.903650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.903664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.903676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.912610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:35:48.321 [2024-11-17 11:30:12.916937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.917351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.917381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.917398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.917699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.917978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.917999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.918014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.321 [2024-11-17 11:30:12.918027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.321 [2024-11-17 11:30:12.931305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.321 [2024-11-17 11:30:12.931933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.321 [2024-11-17 11:30:12.931986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.321 [2024-11-17 11:30:12.932007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.321 [2024-11-17 11:30:12.932311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.321 [2024-11-17 11:30:12.932618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.321 [2024-11-17 11:30:12.932641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.321 [2024-11-17 11:30:12.932657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.322 [2024-11-17 11:30:12.932672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.322 [2024-11-17 11:30:12.945771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.322 [2024-11-17 11:30:12.946202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.322 [2024-11-17 11:30:12.946232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.322 [2024-11-17 11:30:12.946248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.322 [2024-11-17 11:30:12.946559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.322 [2024-11-17 11:30:12.946861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.322 [2024-11-17 11:30:12.946882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.322 [2024-11-17 11:30:12.946896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.322 [2024-11-17 11:30:12.946924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.322 [2024-11-17 11:30:12.958750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:48.322 [2024-11-17 11:30:12.958801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:48.322 [2024-11-17 11:30:12.958815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:48.322 [2024-11-17 11:30:12.958825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:48.322 [2024-11-17 11:30:12.958848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:48.322 [2024-11-17 11:30:12.960206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.322 [2024-11-17 11:30:12.960206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:48.322 [2024-11-17 11:30:12.960271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:48.322 [2024-11-17 11:30:12.960274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:48.322 [2024-11-17 11:30:12.960702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.322 [2024-11-17 11:30:12.960733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.322 [2024-11-17 11:30:12.960750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.322 [2024-11-17 11:30:12.961044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.322 [2024-11-17 11:30:12.961311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.322 [2024-11-17 11:30:12.961332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.322 [2024-11-17 11:30:12.961347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.322 [2024-11-17 11:30:12.961360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:12.974998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:12.975551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:12.975597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:12.975622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:12.975918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:12.976196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:12.976217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:12.976234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:12.976249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:12.989478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:12.990129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:12.990169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:12.990189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:12.990484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:12.990789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:12.990812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:12.990843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:12.990859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:13.004108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:13.004705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:13.004746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:13.004768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:13.005062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:13.005335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:13.005356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:13.005372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:13.005388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:13.018675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:13.019265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:13.019302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:13.019321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:13.019610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:13.019906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:13.019927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:13.019943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:13.019959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:13.033232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:13.033852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:13.033892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:13.033922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:13.034218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:13.034490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:13.034535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:13.034554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:13.034571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:13.047809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:13.048257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:13.048287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:13.048304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:13.048587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:13.048876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:13.048897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:13.048910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.581 [2024-11-17 11:30:13.048923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.581 [2024-11-17 11:30:13.062392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.581 [2024-11-17 11:30:13.062800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.581 [2024-11-17 11:30:13.062828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.581 [2024-11-17 11:30:13.062845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.581 [2024-11-17 11:30:13.063116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.581 [2024-11-17 11:30:13.063391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.581 [2024-11-17 11:30:13.063412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.581 [2024-11-17 11:30:13.063426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.582 [2024-11-17 11:30:13.063439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.582 [2024-11-17 11:30:13.077045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:35:48.582 [2024-11-17 11:30:13.077454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:48.582 [2024-11-17 11:30:13.077490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420
00:35:48.582 [2024-11-17 11:30:13.077507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set
00:35:48.582 [2024-11-17 11:30:13.077785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor
00:35:48.582 [2024-11-17 11:30:13.078061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:35:48.582 [2024-11-17 11:30:13.078083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:35:48.582 [2024-11-17 11:30:13.078098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:35:48.582 [2024-11-17 11:30:13.078110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:35:48.582 [2024-11-17 11:30:13.091815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.582 [2024-11-17 11:30:13.092224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.582 [2024-11-17 11:30:13.092255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.582 [2024-11-17 11:30:13.092271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.582 [2024-11-17 11:30:13.092583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.582 [2024-11-17 11:30:13.092859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.582 [2024-11-17 11:30:13.092881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.582 [2024-11-17 11:30:13.092896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.582 [2024-11-17 11:30:13.092909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.582 [2024-11-17 11:30:13.098340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.582 [2024-11-17 11:30:13.106281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.582 [2024-11-17 11:30:13.106731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.582 [2024-11-17 11:30:13.106759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.582 [2024-11-17 11:30:13.106776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.582 [2024-11-17 11:30:13.107060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.582 [2024-11-17 11:30:13.107326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.582 [2024-11-17 11:30:13.107351] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.582 [2024-11-17 11:30:13.107365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.582 [2024-11-17 11:30:13.107378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.582 [2024-11-17 11:30:13.120742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.582 [2024-11-17 11:30:13.121280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.582 [2024-11-17 11:30:13.121315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.582 [2024-11-17 11:30:13.121334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.582 [2024-11-17 11:30:13.121623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.582 [2024-11-17 11:30:13.121932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.582 [2024-11-17 11:30:13.121953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.582 [2024-11-17 11:30:13.121967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.582 [2024-11-17 11:30:13.121982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:48.582 [2024-11-17 11:30:13.135311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.582 [2024-11-17 11:30:13.135815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.582 [2024-11-17 11:30:13.135858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.582 [2024-11-17 11:30:13.135879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.582 [2024-11-17 11:30:13.136158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.582 [2024-11-17 11:30:13.136438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.582 [2024-11-17 11:30:13.136460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.582 [2024-11-17 11:30:13.136476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.582 [2024-11-17 11:30:13.136491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
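The repeated `connect() failed, errno = 111` records above are the kernel refusing the TCP connection: while the target at 10.0.0.2:4420 is down, every controller reset attempt fails in `nvme_tcp_qpair_connect_sock`, so `spdk_nvme_ctrlr_reconnect_poll_async` reports "controller reinitialization failed" until the listener comes back. A minimal sketch (plain sockets, not SPDK code; the address and port are only those quoted in the log) of how that errno surfaces:

```python
import errno
import socket

def try_connect(addr: str, port: int, timeout: float = 0.5) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((addr, port))

# On Linux, errno 111 is ECONNREFUSED -- the code posix_sock_create logs
# each time the reconnect attempt to the downed target is rejected.
assert errno.ECONNREFUSED == 111
```

With no listener on the port, `try_connect` returns `errno.ECONNREFUSED` instead of raising, which mirrors the loop of failed resets in the trace.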
00:35:48.582 Malloc0 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.582 [2024-11-17 11:30:13.149953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.582 [2024-11-17 11:30:13.150361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:48.582 [2024-11-17 11:30:13.150389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbccf0 with addr=10.0.0.2, port=4420 00:35:48.582 [2024-11-17 11:30:13.150419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbccf0 is same with the state(6) to be set 00:35:48.582 [2024-11-17 11:30:13.150701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbccf0 (9): Bad file descriptor 00:35:48.582 [2024-11-17 11:30:13.150988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:48.582 [2024-11-17 11:30:13.151009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:48.582 [2024-11-17 11:30:13.151022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:48.582 [2024-11-17 11:30:13.151034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:48.582 [2024-11-17 11:30:13.157658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.582 11:30:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 398819 00:35:48.582 [2024-11-17 11:30:13.164585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:48.840 3613.17 IOPS, 14.11 MiB/s [2024-11-17T10:30:13.498Z] [2024-11-17 11:30:13.279171] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
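Once the controller reset succeeds, bdevperf resumes and prints interim throughput counters (e.g. "3613.17 IOPS, 14.11 MiB/s" above) followed by a summary table. The MiB/s column is derivable from the IOPS column and the 4096-byte I/O size stated in the table's Job line; a quick sanity check (a sketch, not part of the test suite; the numbers are copied from this log):

```python
# Cross-check bdevperf's MiB/s figures against its IOPS figures:
# MiB/s = IOPS * IO_SIZE / 2**20 for the fixed 4096-byte verify workload.
IO_SIZE = 4096          # bytes, from "IO size: 4096" in the Job line
MIB = 1024 * 1024

def iops_to_mibs(iops: float) -> float:
    return iops * IO_SIZE / MIB

print(round(iops_to_mibs(3613.17), 2))   # 14.11, matches the interim counter
print(round(iops_to_mibs(6631.80), 2))   # 25.91, matches the summary row
```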
00:35:50.707 4270.29 IOPS, 16.68 MiB/s
[2024-11-17T10:30:16.299Z] 4819.00 IOPS, 18.82 MiB/s
[2024-11-17T10:30:17.673Z] 5258.89 IOPS, 20.54 MiB/s
[2024-11-17T10:30:18.607Z] 5603.70 IOPS, 21.89 MiB/s
[2024-11-17T10:30:19.540Z] 5889.45 IOPS, 23.01 MiB/s
[2024-11-17T10:30:20.473Z] 6118.92 IOPS, 23.90 MiB/s
[2024-11-17T10:30:21.406Z] 6322.31 IOPS, 24.70 MiB/s
[2024-11-17T10:30:22.340Z] 6489.07 IOPS, 25.35 MiB/s
[2024-11-17T10:30:22.340Z] 6629.00 IOPS, 25.89 MiB/s
00:35:57.682                                                Latency(us)
00:35:57.682 [2024-11-17T10:30:22.340Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s   TO/s  Average  min     max
00:35:57.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:57.682 Verification LBA range: start 0x0 length 0x4000
00:35:57.682 Nvme1n1 :                      15.01      6631.80  25.91  7791.08  0.00  8847.96  655.36  21456.97
00:35:57.682 [2024-11-17T10:30:22.340Z] ===================================================================================================================
00:35:57.682 [2024-11-17T10:30:22.340Z] Total :                           6631.80  25.91  7791.08  0.00  8847.96  655.36  21456.97
00:35:57.940 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121
-- # sync 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.941 rmmod nvme_tcp 00:35:57.941 rmmod nvme_fabrics 00:35:57.941 rmmod nvme_keyring 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 399991 ']' 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 399991 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 399991 ']' 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 399991 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399991 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399991' 00:35:57.941 killing process with pid 399991 00:35:57.941 11:30:22 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 399991 00:35:57.941 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 399991 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.199 11:30:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.734 00:36:00.734 real 0m22.543s 00:36:00.734 user 1m0.318s 00:36:00.734 sys 0m4.214s 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:00.734 ************************************ 00:36:00.734 END TEST nvmf_bdevperf 00:36:00.734 
************************************ 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:00.734 ************************************ 00:36:00.734 START TEST nvmf_target_disconnect 00:36:00.734 ************************************ 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:00.734 * Looking for test storage... 00:36:00.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.734 11:30:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.734 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:00.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.734 --rc genhtml_branch_coverage=1 00:36:00.735 --rc genhtml_function_coverage=1 00:36:00.735 --rc genhtml_legend=1 00:36:00.735 --rc geninfo_all_blocks=1 00:36:00.735 --rc geninfo_unexecuted_blocks=1 
00:36:00.735 00:36:00.735 ' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:00.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.735 --rc genhtml_branch_coverage=1 00:36:00.735 --rc genhtml_function_coverage=1 00:36:00.735 --rc genhtml_legend=1 00:36:00.735 --rc geninfo_all_blocks=1 00:36:00.735 --rc geninfo_unexecuted_blocks=1 00:36:00.735 00:36:00.735 ' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:00.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.735 --rc genhtml_branch_coverage=1 00:36:00.735 --rc genhtml_function_coverage=1 00:36:00.735 --rc genhtml_legend=1 00:36:00.735 --rc geninfo_all_blocks=1 00:36:00.735 --rc geninfo_unexecuted_blocks=1 00:36:00.735 00:36:00.735 ' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:00.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.735 --rc genhtml_branch_coverage=1 00:36:00.735 --rc genhtml_function_coverage=1 00:36:00.735 --rc genhtml_legend=1 00:36:00.735 --rc geninfo_all_blocks=1 00:36:00.735 --rc geninfo_unexecuted_blocks=1 00:36:00.735 00:36:00.735 ' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.735 11:30:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:00.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:00.735 11:30:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.640 
11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:02.640 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:02.640 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.640 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:02.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:02.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:02.641 11:30:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:36:02.641 00:36:02.641 --- 10.0.0.2 ping statistics --- 00:36:02.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.641 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:36:02.641 00:36:02.641 --- 10.0.0.1 ping statistics --- 00:36:02.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.641 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.641 11:30:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.641 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:02.901 ************************************ 00:36:02.901 START TEST nvmf_target_disconnect_tc1 00:36:02.901 ************************************ 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.901 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:02.902 [2024-11-17 11:30:27.407752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:02.902 [2024-11-17 11:30:27.407835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa90 with 
addr=10.0.0.2, port=4420 00:36:02.902 [2024-11-17 11:30:27.407867] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:02.902 [2024-11-17 11:30:27.407891] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:02.902 [2024-11-17 11:30:27.407905] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:02.902 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:02.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:02.902 Initializing NVMe Controllers 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:02.902 00:36:02.902 real 0m0.097s 00:36:02.902 user 0m0.042s 00:36:02.902 sys 0m0.054s 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:02.902 ************************************ 00:36:02.902 END TEST nvmf_target_disconnect_tc1 00:36:02.902 ************************************ 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:02.902 11:30:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:02.902 ************************************ 00:36:02.902 START TEST nvmf_target_disconnect_tc2 00:36:02.902 ************************************ 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403148 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403148 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403148 ']' 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.902 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.902 [2024-11-17 11:30:27.521801] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:02.902 [2024-11-17 11:30:27.521903] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.161 [2024-11-17 11:30:27.595379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.161 [2024-11-17 11:30:27.642427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.161 [2024-11-17 11:30:27.642483] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.161 [2024-11-17 11:30:27.642511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.161 [2024-11-17 11:30:27.642522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.161 [2024-11-17 11:30:27.642539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.161 [2024-11-17 11:30:27.644072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:03.161 [2024-11-17 11:30:27.644136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:03.161 [2024-11-17 11:30:27.644167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:03.161 [2024-11-17 11:30:27.644169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.161 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.161 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:03.161 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.161 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.161 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 Malloc0 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 [2024-11-17 11:30:27.859006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 [2024-11-17 11:30:27.887278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=403173 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.419 11:30:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:05.327 11:30:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 403148 00:36:05.327 11:30:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 
00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Write completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 Read completed with error (sct=0, sc=8) 00:36:05.327 starting I/O failed 00:36:05.327 [2024-11-17 11:30:29.911740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
00:36:05.327 *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:05.327 [repeated completion entries elided: Read/Write completed with error (sct=0, sc=8), starting I/O failed]
00:36:05.327 [2024-11-17 11:30:29.912023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:05.328 [repeated completion entries elided: Read/Write completed with error (sct=0, sc=8), starting I/O failed]
00:36:05.328 [2024-11-17 11:30:29.912358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:05.328 [2024-11-17 11:30:29.912582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.328 [2024-11-17 11:30:29.912622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.328 qpair failed and we were unable to recover it.
00:36:05.328 [2024-11-17 11:30:29.912753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.328 [2024-11-17 11:30:29.912781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.328 qpair failed and we were unable to recover it.
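Every entry in the completion dumps above carries the same status pair, (sct=0, sc=8). A hedged reading, assuming the NVMe base specification's Generic Command Status table (sct=0 selects the generic status code type, and code 0x8 in that table is "Command Aborted due to SQ Deletion", consistent with the queue pairs being torn down after the CQ transport error): the `decode_status` helper below is hypothetical, not part of SPDK.

```python
# Hypothetical helper (not part of SPDK): decode the (sct, sc) status
# pair printed in the completion dump above. sct=0 selects the NVMe
# Generic Command Status type; the table is abridged from the NVMe
# base specification.
GENERIC_STATUS = {
    0x0: "Successful Completion",
    0x4: "Data Transfer Error",
    0x6: "Internal Error",
    0x7: "Command Abort Requested",
    0x8: "Command Aborted due to SQ Deletion",
}

def decode_status(sct: int, sc: int) -> str:
    """Return a human-readable name for an (sct, sc) pair from the log."""
    if sct == 0:
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:x}")
    return f"sct=0x{sct:x}, sc=0x{sc:x}"

# The status seen throughout this dump:
print(decode_status(0, 8))
```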
00:36:05.328 [2024-11-17 11:30:29.912908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.328 [2024-11-17 11:30:29.912935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.328 qpair failed and we were unable to recover it.
00:36:05.328-00:36:05.330 [repeated retry entries elided, 2024-11-17 11:30:29.913039 through 11:30:29.923342: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 / 0x1edeb40 / 0x7f39bc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:36:05.330 [repeated completion entries elided: Read/Write completed with error (sct=0, sc=8), starting I/O failed]
00:36:05.330 [2024-11-17 11:30:29.923663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:05.330 [2024-11-17 11:30:29.923743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.330 [2024-11-17 11:30:29.923782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.330 qpair failed and we were unable to recover it.
00:36:05.330 [2024-11-17 11:30:29.923931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.330 [2024-11-17 11:30:29.923959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.330 qpair failed and we were unable to recover it.
00:36:05.330 [2024-11-17 11:30:29.924098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 
00:36:05.330 [2024-11-17 11:30:29.924750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.924917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.924994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.925020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.925130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.330 [2024-11-17 11:30:29.925155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.330 qpair failed and we were unable to recover it. 00:36:05.330 [2024-11-17 11:30:29.925277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.925317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.925430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.925457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.925578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.925607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.925701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.925727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.925870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.925896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.925975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.926087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.926238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.926398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.926558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.926714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.926896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.926924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.927622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.927972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.927999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.928134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.928293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.928433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.928567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.928679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.928786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.928898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.928924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.929502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 00:36:05.331 [2024-11-17 11:30:29.929964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.929990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.331 qpair failed and we were unable to recover it. 
00:36:05.331 [2024-11-17 11:30:29.930101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.331 [2024-11-17 11:30:29.930128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.930221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.930361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.930494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.930628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.930768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.930872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.930898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.931476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.931871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.931896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.932121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.932828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.932934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.932960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.933516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.933909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.933935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.934018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.934044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 
00:36:05.332 [2024-11-17 11:30:29.934124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.934149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.934240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.934266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.934358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.934385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.332 qpair failed and we were unable to recover it. 00:36:05.332 [2024-11-17 11:30:29.934464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.332 [2024-11-17 11:30:29.934488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 00:36:05.333 [2024-11-17 11:30:29.934639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.333 [2024-11-17 11:30:29.934665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.333 qpair failed and we were unable to recover it. 
00:36:05.333 [2024-11-17 11:30:29.934749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.333 [2024-11-17 11:30:29.934777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.333 qpair failed and we were unable to recover it.
00:36:05.333-00:36:05.336 [11:30:29.934861 - 11:30:29.949813] ... the same three-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously for tqpair=0x1edeb40, tqpair=0x7f39c4000b90, and tqpair=0x7f39bc000b90 ...
00:36:05.336 [2024-11-17 11:30:29.949920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.949946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.950626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.950931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.950958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.951304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.951858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.951969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.951995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.952620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.952894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.952920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.953298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 00:36:05.336 [2024-11-17 11:30:29.953859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.336 [2024-11-17 11:30:29.953885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.336 qpair failed and we were unable to recover it. 
00:36:05.336 [2024-11-17 11:30:29.954098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.954154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.954376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.954430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.954536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.954564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.954708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.954734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.954880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.954906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.955024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.955203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.955357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.955541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.955672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.955848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.955961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.955988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.956131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.956256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.956441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.956599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.956753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.956893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.956919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.957283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.957896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.957922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.958010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.958683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.958965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.958991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.959072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.959099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.959205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.959231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 
00:36:05.337 [2024-11-17 11:30:29.959330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.959369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.959492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.959520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.337 qpair failed and we were unable to recover it. 00:36:05.337 [2024-11-17 11:30:29.959611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.337 [2024-11-17 11:30:29.959636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.959742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.959768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.959906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.959931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.960038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.960189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.960345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.960513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.960639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.960779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.960926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.960951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.961448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.961906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.961995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.962135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.962237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.962371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.962487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.962646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.962784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.962920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.962946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.963449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.963958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.963983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.964064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.964090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.964200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.964226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.964344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.964372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.964482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.964509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 00:36:05.338 [2024-11-17 11:30:29.964641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.338 [2024-11-17 11:30:29.964679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.338 qpair failed and we were unable to recover it. 
00:36:05.338 [2024-11-17 11:30:29.964795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.964822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.964920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.964946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.965464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.965909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.965995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.966098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.966251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.966387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.966542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.966702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.966840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.966866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.966981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.967505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.967920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.967946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.968146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.968836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.968863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.968978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 
00:36:05.339 [2024-11-17 11:30:29.969578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.339 [2024-11-17 11:30:29.969890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.339 qpair failed and we were unable to recover it. 00:36:05.339 [2024-11-17 11:30:29.969967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.969993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.970246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.970856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.970882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.970995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.971587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.971872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.971989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.972099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.972234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.972367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.972507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.972627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.972739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.972888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.972914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.973556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.973911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.973938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 00:36:05.340 [2024-11-17 11:30:29.974044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.340 [2024-11-17 11:30:29.974081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.340 qpair failed and we were unable to recover it. 
00:36:05.340 [2024-11-17 11:30:29.974164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.974302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.974423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.974580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.974707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.974862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.340 [2024-11-17 11:30:29.974888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.340 qpair failed and we were unable to recover it.
00:36:05.340 [2024-11-17 11:30:29.975002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.341 [2024-11-17 11:30:29.975027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.341 qpair failed and we were unable to recover it.
00:36:05.341 [2024-11-17 11:30:29.975134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.341 [2024-11-17 11:30:29.975165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.341 qpair failed and we were unable to recover it.
00:36:05.341 [2024-11-17 11:30:29.975259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.341 [2024-11-17 11:30:29.975285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.341 qpair failed and we were unable to recover it.
00:36:05.341 [2024-11-17 11:30:29.975372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.341 [2024-11-17 11:30:29.975402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.341 qpair failed and we were unable to recover it.
00:36:05.341 [2024-11-17 11:30:29.975500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-11-17 11:30:29.975541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-11-17 11:30:29.975666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-11-17 11:30:29.975693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-11-17 11:30:29.975778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-11-17 11:30:29.975804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-11-17 11:30:29.975881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-11-17 11:30:29.975908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 00:36:05.341 [2024-11-17 11:30:29.975986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.341 [2024-11-17 11:30:29.976011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.341 qpair failed and we were unable to recover it. 
00:36:05.341 [2024-11-17 11:30:29.976091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 
00:36:05.628 [2024-11-17 11:30:29.976706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.976931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.976957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 
00:36:05.628 [2024-11-17 11:30:29.977274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.977834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.977861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 
00:36:05.628 [2024-11-17 11:30:29.977982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.978010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.978113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.628 [2024-11-17 11:30:29.978138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.628 qpair failed and we were unable to recover it. 00:36:05.628 [2024-11-17 11:30:29.978251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.978387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.978504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.978656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.978807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.978922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.978949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.979333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.979804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.979943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.979968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.980627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.980913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.980939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.981372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.981914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.981939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.982098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.982345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.982522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.982663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.982809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.629 [2024-11-17 11:30:29.982918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.982944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.983090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.983116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.983282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.983308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.983422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.983447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 00:36:05.629 [2024-11-17 11:30:29.983542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.629 [2024-11-17 11:30:29.983568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.629 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.983653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.983680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.983809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.983835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.984044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.984290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.984462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.984600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.984737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.984909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.984964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.985408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.985931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.985957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.986042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.986704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.986954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.986982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.987404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.987906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 00:36:05.630 [2024-11-17 11:30:29.987983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.630 [2024-11-17 11:30:29.988007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.630 qpair failed and we were unable to recover it. 
00:36:05.630 [2024-11-17 11:30:29.988123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.988860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.988886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.630 [2024-11-17 11:30:29.989015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.630 [2024-11-17 11:30:29.989042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.630 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.989897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.989923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.990948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.990975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.991904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.991930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.992884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.992909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.993887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.993989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.994015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.994094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.994120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.631 qpair failed and we were unable to recover it.
00:36:05.631 [2024-11-17 11:30:29.994230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.631 [2024-11-17 11:30:29.994256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.994397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.994423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.994542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.994584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.994735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.994762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.994904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.994951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.995921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.995948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.996936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.996961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.997870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.997896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.998966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.998991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.999104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.999129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.999247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.999277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.999369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.999395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.999520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.632 [2024-11-17 11:30:29.999553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.632 qpair failed and we were unable to recover it.
00:36:05.632 [2024-11-17 11:30:29.999665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:29.999691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:29.999802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:29.999828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:29.999912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:29.999938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.000874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.000994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.001028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.001120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.001148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.001241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.633 [2024-11-17 11:30:30.001270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.633 qpair failed and we were unable to recover it.
00:36:05.633 [2024-11-17 11:30:30.001366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.001490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.001618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.001736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.001854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 
00:36:05.633 [2024-11-17 11:30:30.001958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.001984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 
00:36:05.633 [2024-11-17 11:30:30.002604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.002867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.002893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 
00:36:05.633 [2024-11-17 11:30:30.003250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 00:36:05.633 [2024-11-17 11:30:30.003770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.633 [2024-11-17 11:30:30.003796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.633 qpair failed and we were unable to recover it. 
00:36:05.633 [2024-11-17 11:30:30.003934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.003976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.004145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.004188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.004336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.004382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.004495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.004521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.004617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.004643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.004838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.004864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.004980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.005180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.005348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.005502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.005651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.005768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.005794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.005946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.006387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.006777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.006984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.007109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.007279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.007459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.007582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.007723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.007873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.007907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.008670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.008891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.008917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.009061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.009104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.009244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.009290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 
00:36:05.634 [2024-11-17 11:30:30.009437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.009468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.009600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.009627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.009721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.634 [2024-11-17 11:30:30.009747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.634 qpair failed and we were unable to recover it. 00:36:05.634 [2024-11-17 11:30:30.009839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.009868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.009979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.010155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.010317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.010507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.010646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.010756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.010877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.010903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.010981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.011094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.011278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.011470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.011663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.011822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.011860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.012020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.012242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.012412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.012614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.012731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.012905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.012932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.013440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.013876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.013990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.014144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.014280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.014432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.014547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.014656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.014771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.014918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.014944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.015037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.015065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.015159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.015187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.015398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.015437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 
00:36:05.635 [2024-11-17 11:30:30.015534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.635 [2024-11-17 11:30:30.015568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.635 qpair failed and we were unable to recover it. 00:36:05.635 [2024-11-17 11:30:30.015657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.015684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.015770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.015796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.015893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.015920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.016058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.016206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.016354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.016504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.016640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.016753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.016941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.016995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.017744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.017967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.017993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.018428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.018934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.018960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.019075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.019107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.019260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.019291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.019471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.019504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.019643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.019683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.019806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.019845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.019991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.020146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.020365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.020536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.020674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 
00:36:05.636 [2024-11-17 11:30:30.020809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.020952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.021071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.021097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.021210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.636 [2024-11-17 11:30:30.021235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.636 qpair failed and we were unable to recover it. 00:36:05.636 [2024-11-17 11:30:30.021338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.021364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.021470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.021510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.021643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.021671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.021766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.021793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.021906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.021933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.022043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.022186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.022348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.022512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.022657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.022758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.022923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.022970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.023621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.023878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.023985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.024099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.024261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.024449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.024564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.024733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.024905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.024947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.025049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.025209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.025395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.025544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.025686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 
00:36:05.637 [2024-11-17 11:30:30.025874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.025923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.026063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.637 [2024-11-17 11:30:30.026107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.637 qpair failed and we were unable to recover it. 00:36:05.637 [2024-11-17 11:30:30.026289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.026454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.026585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 
00:36:05.638 [2024-11-17 11:30:30.026695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.026804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.026944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.026969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 
00:36:05.638 [2024-11-17 11:30:30.027337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.027934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.027961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 
00:36:05.638 [2024-11-17 11:30:30.028068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 
00:36:05.638 [2024-11-17 11:30:30.028727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.028890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.028979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.029006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.029121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.029148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 00:36:05.638 [2024-11-17 11:30:30.029259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.638 [2024-11-17 11:30:30.029286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.638 qpair failed and we were unable to recover it. 
00:36:05.638 [2024-11-17 11:30:30.029373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.029489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.029615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.029729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.029834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.029958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.029997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.030923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.030972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.031065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.031090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.031204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.638 [2024-11-17 11:30:30.031231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.638 qpair failed and we were unable to recover it.
00:36:05.638 [2024-11-17 11:30:30.031323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.031350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.031451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.031492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.031623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.031652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.031766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.031793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.031883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.031910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.032909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.032963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.033936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.033960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.034883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.034922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.035816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.035970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.036895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.639 [2024-11-17 11:30:30.036921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.639 qpair failed and we were unable to recover it.
00:36:05.639 [2024-11-17 11:30:30.037032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.037899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.037933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.038947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.038973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.039944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.039969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.040916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.040942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.041931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.640 [2024-11-17 11:30:30.041980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.640 qpair failed and we were unable to recover it.
00:36:05.640 [2024-11-17 11:30:30.042069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.641 [2024-11-17 11:30:30.042095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.641 qpair failed and we were unable to recover it.
00:36:05.641 [2024-11-17 11:30:30.042203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.641 [2024-11-17 11:30:30.042244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.641 qpair failed and we were unable to recover it.
00:36:05.641 [2024-11-17 11:30:30.042380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.641 [2024-11-17 11:30:30.042420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.641 qpair failed and we were unable to recover it.
00:36:05.641 [2024-11-17 11:30:30.042542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.641 [2024-11-17 11:30:30.042576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.641 qpair failed and we were unable to recover it.
00:36:05.641 [2024-11-17 11:30:30.042691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.042724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.042851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.042913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.043418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.043961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.043989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.044129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.044255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.044395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.044543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.044690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.044855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.044881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.044991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.045116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.045242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.045386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.045555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.045700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.045841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.045865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.046328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.046835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 
00:36:05.641 [2024-11-17 11:30:30.046971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.046996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.047083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.047109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.047184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.047209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.641 [2024-11-17 11:30:30.047294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.641 [2024-11-17 11:30:30.047319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.641 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.047434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.047459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.047544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.047573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.047692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.047719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.047806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.047837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.047949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.047974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.048205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.048837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.048863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.048979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.049495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.049900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.049928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.050116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.050818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.050966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.050994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.051099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.051133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.051259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.051291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.051437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.051484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 
00:36:05.642 [2024-11-17 11:30:30.051632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.051660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.051756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.642 [2024-11-17 11:30:30.051787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.642 qpair failed and we were unable to recover it. 00:36:05.642 [2024-11-17 11:30:30.051890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.051916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 
00:36:05.643 [2024-11-17 11:30:30.052283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.052795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 
00:36:05.643 [2024-11-17 11:30:30.052911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.052936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 
00:36:05.643 [2024-11-17 11:30:30.053521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.053913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 00:36:05.643 [2024-11-17 11:30:30.054026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.054052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it. 
00:36:05.643 [2024-11-17 11:30:30.054166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.643 [2024-11-17 11:30:30.054195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.643 qpair failed and we were unable to recover it.
[The same two-line pair — connect() failed, errno = 111, followed by "qpair failed and we were unable to recover it." — repeats continuously from 11:30:30.054 through 11:30:30.069, cycling over tqpair values 0x7f39b8000b90, 0x7f39bc000b90, and 0x1edeb40, all targeting addr=10.0.0.2, port=4420. Duplicate repetitions elided.]
00:36:05.646 [2024-11-17 11:30:30.069490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.069532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.069644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.069677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.069826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.069865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 
00:36:05.646 [2024-11-17 11:30:30.070385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.070876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.070901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 
00:36:05.646 [2024-11-17 11:30:30.070977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.071003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.071075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.071100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.071216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.071245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.071337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.071363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.646 qpair failed and we were unable to recover it. 00:36:05.646 [2024-11-17 11:30:30.071449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.646 [2024-11-17 11:30:30.071476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.071590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.071617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.071700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.071726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.071802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.071828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.071944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.071970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.072083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.072256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.072401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.072568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.072709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.072845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.072880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.073009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.073688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.073967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.073993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.074108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.074344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.074537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.074651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.074771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.074935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.075177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.075337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.075515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.075668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.075780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.075885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.075912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.076010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.076234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.076406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.076577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.076712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 
00:36:05.647 [2024-11-17 11:30:30.076879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.076922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.077073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.647 [2024-11-17 11:30:30.077109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.647 qpair failed and we were unable to recover it. 00:36:05.647 [2024-11-17 11:30:30.077312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.077369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.077501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.077543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.077684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.077711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 
00:36:05.648 [2024-11-17 11:30:30.077825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.077852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.077995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.078171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.078395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.078610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 
00:36:05.648 [2024-11-17 11:30:30.078773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.078914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.078940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 
00:36:05.648 [2024-11-17 11:30:30.079493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.079967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.079993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 
00:36:05.648 [2024-11-17 11:30:30.080160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.080192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.080294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.080325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.080488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.080514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.080654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.080694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 00:36:05.648 [2024-11-17 11:30:30.080782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.648 [2024-11-17 11:30:30.080808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.648 qpair failed and we were unable to recover it. 
00:36:05.648 [2024-11-17 11:30:30.080938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.080968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.081939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.081966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.082879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.082960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.083006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.648 [2024-11-17 11:30:30.083103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.648 [2024-11-17 11:30:30.083134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.648 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.083270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.083302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.083435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.083466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.083582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.083615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.083734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.083760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.083901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.083928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.084894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.084921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.085913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.085940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.086907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.086934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.087903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.087949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.088871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.088982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.089026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.649 [2024-11-17 11:30:30.089182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.649 [2024-11-17 11:30:30.089232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.649 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.089367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.089414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.089562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.089605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.089696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.089723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.089842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.089867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.089978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.090880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.090906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.091896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.091922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.092951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.092982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.093912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.093939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.094868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.094920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.095050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.095102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.650 qpair failed and we were unable to recover it.
00:36:05.650 [2024-11-17 11:30:30.095216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.650 [2024-11-17 11:30:30.095241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.095355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.095463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.095601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.095736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.095904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.095995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.096891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.096918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.097882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.097908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.098011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.098041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.098198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.098242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.098331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.098356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.098462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.651 [2024-11-17 11:30:30.098486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.651 qpair failed and we were unable to recover it.
00:36:05.651 [2024-11-17 11:30:30.098573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.098598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.098699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.098738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.098836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.098864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.098973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.099197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 
00:36:05.651 [2024-11-17 11:30:30.099371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.099482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.099636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.099744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.651 [2024-11-17 11:30:30.099770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.651 qpair failed and we were unable to recover it. 00:36:05.651 [2024-11-17 11:30:30.099889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.099917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.100036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.100721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.100916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.100999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.101142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.101439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.101600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.101754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.101908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.101934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.102025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.102168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.102303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.102442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.102613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.102751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.102887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.102913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.103637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.103944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.103971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.104403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.104875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.104987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.105029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 
00:36:05.652 [2024-11-17 11:30:30.105188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.105219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.105365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.652 [2024-11-17 11:30:30.105391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.652 qpair failed and we were unable to recover it. 00:36:05.652 [2024-11-17 11:30:30.105498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.105539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.105631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.105658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.105746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.105772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.105938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.105969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.106730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.106866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.106982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.107192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.107370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.107537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.107674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.107811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.107856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.108385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.108939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.108966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.109129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.109351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.109521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.109643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 00:36:05.653 [2024-11-17 11:30:30.109754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.653 [2024-11-17 11:30:30.109908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.653 [2024-11-17 11:30:30.109934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.653 qpair failed and we were unable to recover it. 
00:36:05.654 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated 41 more times for tqpair=0x7f39b8000b90 between 11:30:30.110076 and 11:30:30.116388 ...] 
00:36:05.654 [2024-11-17 11:30:30.116500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.654 [2024-11-17 11:30:30.116556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.654 qpair failed and we were unable to recover it. 
00:36:05.656 [... the same sequence repeated 72 more times for tqpair=0x7f39c4000b90 between 11:30:30.116695 and 11:30:30.129725 ...] 
00:36:05.656 [2024-11-17 11:30:30.129855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-11-17 11:30:30.129903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-11-17 11:30:30.130044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-11-17 11:30:30.130081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-11-17 11:30:30.130261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.656 [2024-11-17 11:30:30.130297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.656 qpair failed and we were unable to recover it. 00:36:05.656 [2024-11-17 11:30:30.130554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.130604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.130747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.130777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.130951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.130987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.131168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.131212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.131387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.131424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.131551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.131600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.131696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.131727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.131883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.131920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.132052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.132083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.132216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.132246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.132384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.132414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.132589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.132626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.132748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.132785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.132967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.133184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.133379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.133576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.133758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.133924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.133962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.134148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.134186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.134371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.134409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.134594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.134633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.134759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.134798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.134946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.134985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.135140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.135179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.135341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.135379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.135536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.135575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.135749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.135787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.135955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.135993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.136150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.136188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.136350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.136389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.136577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.136617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.136745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.136783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.136953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.136991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.137142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.137180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.137342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.137379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.657 [2024-11-17 11:30:30.137499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.137548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 
00:36:05.657 [2024-11-17 11:30:30.137686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.657 [2024-11-17 11:30:30.137725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.657 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.137852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.137890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.138050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.138088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.138202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.138241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.138410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.138448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.138609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.138650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.138798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.138837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.138981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.139019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.139176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.139216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.139402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.139440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.139627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.139666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.139858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.139897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.140028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.140066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.140224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.140262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.140450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.140488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.140650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.140690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.140856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.140896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.141088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.141127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.141257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.141296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.141486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.141534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.141674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.141712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.141877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.141916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.142074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.142111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.142343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.142407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.142616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.142655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.142812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.142850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.142985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.143155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.143351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.143581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.143740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.143937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.143975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.144127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.144166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.144354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.144398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 00:36:05.658 [2024-11-17 11:30:30.144546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.144585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it. 
00:36:05.658 [2024-11-17 11:30:30.144731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.658 [2024-11-17 11:30:30.144770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.658 qpair failed and we were unable to recover it.
[... the same three-message error sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously with only the timestamps changing, through 2024-11-17 11:30:30.168338 ...]
00:36:05.662 [2024-11-17 11:30:30.168510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.168565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.168705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.168748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.168891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.168934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.169099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.169151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.169313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.169356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.169498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.169552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.169724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.169768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.169888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.169931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.170064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.170106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.170244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.170288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.170467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.170510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.170661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.170704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.170841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.170884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.171048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.171092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.171271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.171314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.171442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.171486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.171679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.171723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.171861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.171905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.172080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.172123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.172298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.172341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.172460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.172504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.172698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.172741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.172919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.172962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.173165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.173207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.173411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.173470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.173638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.173682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.173852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.173895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.174022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.174064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.174202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.174246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.174439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.174494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 
00:36:05.662 [2024-11-17 11:30:30.174744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.174786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.174950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.174992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.175130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.175174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.175313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.662 [2024-11-17 11:30:30.175357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.662 qpair failed and we were unable to recover it. 00:36:05.662 [2024-11-17 11:30:30.175492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.175550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.175692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.175738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.175944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.175987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.176124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.176168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.176313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.176357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.176517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.176574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.176715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.176760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.176933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.176977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.177158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.177200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.177418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.177468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.177663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.177708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.177855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.177897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.178032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.178076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.178210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.178253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.178432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.178474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.178635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.178679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.178810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.178853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.179019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.179062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.179240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.179284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.179427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.179469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.179658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.179702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.179854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.179897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.180069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.180112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.180290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.180333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.180469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.180513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.180688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.180731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.180904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.180947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.181094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.181137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.181308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.181351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.181512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.181570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.181779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.181822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.181985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.182027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.182166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.182209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.182379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.182423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.182556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.182615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.182748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.182793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 
00:36:05.663 [2024-11-17 11:30:30.183010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.183054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.183194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.663 [2024-11-17 11:30:30.183238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.663 qpair failed and we were unable to recover it. 00:36:05.663 [2024-11-17 11:30:30.183371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-11-17 11:30:30.183414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-11-17 11:30:30.183595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-11-17 11:30:30.183640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 00:36:05.664 [2024-11-17 11:30:30.183822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.664 [2024-11-17 11:30:30.183866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.664 qpair failed and we were unable to recover it. 
00:36:05.664 [2024-11-17 11:30:30.184041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.664 [2024-11-17 11:30:30.184084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.664 qpair failed and we were unable to recover it.
00:36:05.667 [2024-11-17 11:30:30.208202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.208248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.208387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.208434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.208637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.208683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.208881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.208927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.209076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.209123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-11-17 11:30:30.209264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.209309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.209470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.209515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.209709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.209755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.209954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.209999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.210173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.210218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-11-17 11:30:30.210346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.210391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.210519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.210603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.210826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.210872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.211055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.211108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.211353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.211403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-11-17 11:30:30.211586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.211633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.211823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.211870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.212024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.212067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.212233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.212278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.212463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.212509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-11-17 11:30:30.212714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.212759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.212937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.212984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.213130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.213176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.213321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.213366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.213555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.213601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 
00:36:05.667 [2024-11-17 11:30:30.213764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.213810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.213961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.214008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.214198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.214244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.667 [2024-11-17 11:30:30.214418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.667 [2024-11-17 11:30:30.214463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.667 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.214678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.214726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.214897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.214943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.215086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.215131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.215278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.215323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.215477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.215522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.215728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.215774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.215919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.215964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.216105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.216152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.216288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.216334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.216468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.216511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.216715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.216761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.216912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.216957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.217101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.217149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.217333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.217379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.217537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.217584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.217743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.217788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.217939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.217984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.218162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.218208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.218435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.218487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.218698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.218744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.218923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.218979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.219157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.219203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.219357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.219404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.219594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.219641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.219817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.219872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.220061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.220107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.220280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.220325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.220494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.220551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.220726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.220772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.220957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.221178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.221375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.221563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.221741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.221941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.221986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.222164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.222209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 
00:36:05.668 [2024-11-17 11:30:30.222427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.222472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.668 qpair failed and we were unable to recover it. 00:36:05.668 [2024-11-17 11:30:30.222690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.668 [2024-11-17 11:30:30.222736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.222926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.222977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.223158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.223211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.223468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.223520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 
00:36:05.669 [2024-11-17 11:30:30.223746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.223817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.224026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.224077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.224278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.224346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.224517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.224581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.224793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.224839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 
00:36:05.669 [2024-11-17 11:30:30.224988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.225033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.225190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.225237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.225420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.225466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.225685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.225732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 00:36:05.669 [2024-11-17 11:30:30.225886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.669 [2024-11-17 11:30:30.225933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.669 qpair failed and we were unable to recover it. 
00:36:05.672 [2024-11-17 11:30:30.250710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.250758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.250911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.250968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.251181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.251245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.251473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.251555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.251831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.251885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 
00:36:05.672 [2024-11-17 11:30:30.252076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.252126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.252327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.252376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.252563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.252614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.252756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.252805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.253050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.253098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 
00:36:05.672 [2024-11-17 11:30:30.253259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.253317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.253515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.253578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.253731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.253781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.253941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.253998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.254204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.254274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 
00:36:05.672 [2024-11-17 11:30:30.254515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.254612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.254812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.254862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.255047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.255097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.255292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.255340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.255548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.255599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 
00:36:05.672 [2024-11-17 11:30:30.255762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.255812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.256021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.256072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.256288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.256337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.256514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.256607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 00:36:05.672 [2024-11-17 11:30:30.256850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.672 [2024-11-17 11:30:30.256903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.672 qpair failed and we were unable to recover it. 
00:36:05.954 [2024-11-17 11:30:30.257073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.954 [2024-11-17 11:30:30.257122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.954 qpair failed and we were unable to recover it. 00:36:05.954 [2024-11-17 11:30:30.257264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.954 [2024-11-17 11:30:30.257313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.954 qpair failed and we were unable to recover it. 00:36:05.954 [2024-11-17 11:30:30.257499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.954 [2024-11-17 11:30:30.257569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.954 qpair failed and we were unable to recover it. 00:36:05.954 [2024-11-17 11:30:30.257756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.954 [2024-11-17 11:30:30.257804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.954 qpair failed and we were unable to recover it. 00:36:05.954 [2024-11-17 11:30:30.257981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.954 [2024-11-17 11:30:30.258031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.258211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.258259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.258404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.258453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.258606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.258657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.258848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.258896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.259055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.259103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.259260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.259309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.259494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.259577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.259783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.259832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.259990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.260041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.260232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.260280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.260425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.260473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.260639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.260688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.260871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.260919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.261113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.261160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.261320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.261367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.261540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.261590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.261741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.261791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.261950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.261998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.262165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.262213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.262365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.262413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.262606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.262664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.262828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.262877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.263026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.263075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.263214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.263262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.263395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.263443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.263617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.263667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.263865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.263913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.264065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.264113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.264245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.264294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.264482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.264542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.264744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.264792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.264953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.265003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.265228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.265276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.265444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.265493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.265704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.265752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.265979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.266047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.266206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.266257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.266430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.266478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.266657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.266708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 00:36:05.955 [2024-11-17 11:30:30.266871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.955 [2024-11-17 11:30:30.266922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.955 qpair failed and we were unable to recover it. 
00:36:05.955 [2024-11-17 11:30:30.267117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.267165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 00:36:05.956 [2024-11-17 11:30:30.267392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.267441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 00:36:05.956 [2024-11-17 11:30:30.267608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.267658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 00:36:05.956 [2024-11-17 11:30:30.267855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.267903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 00:36:05.956 [2024-11-17 11:30:30.268083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.268132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 
00:36:05.956 [2024-11-17 11:30:30.268284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.956 [2024-11-17 11:30:30.268333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.956 qpair failed and we were unable to recover it. 
[... same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f39c4000b90, addr=10.0.0.2, port=4420 repeated through 2024-11-17 11:30:30.294748 ...]
00:36:05.958 [2024-11-17 11:30:30.294904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.958 [2024-11-17 11:30:30.294952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.958 qpair failed and we were unable to recover it. 00:36:05.958 [2024-11-17 11:30:30.295104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.958 [2024-11-17 11:30:30.295153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.958 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.295309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.295358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.295534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.295584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.295765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.295813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.295990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.296038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.296214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.296262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.296470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.296518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.296770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.296818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.296976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.297026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.297210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.297258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.297490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.297555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.297729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.297778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.297925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.297974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.298130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.298178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.298361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.298409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.298565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.298776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.298825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.298960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.299009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.299196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.299246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.299436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.299495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.299707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.299757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.299948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.299996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.300180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.300229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.300422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.300470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.300642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.300691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.300855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.300904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.301097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.301144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.301329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.301377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.301551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.301608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.301754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.301821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.302038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.302103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.302291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.302338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.302495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.302560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.302724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.302773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 
00:36:05.959 [2024-11-17 11:30:30.302963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.959 [2024-11-17 11:30:30.303012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.959 qpair failed and we were unable to recover it. 00:36:05.959 [2024-11-17 11:30:30.303167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.303215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.303377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.303426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.303585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.303635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.303834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.303882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.304049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.304098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.304325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.304471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.304518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.304728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.304777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.304934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.304982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.305167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.305216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.305367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.305416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.305597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.305647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.305819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.305886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.306120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.306168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.306391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.306438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.306673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.306740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.306964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.307012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.307188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.307236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.307425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.307473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.307674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.307723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.307946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.307994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.308184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.308233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.308407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.308455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.308688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.308738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.308943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.309000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.309150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.309198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.309420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.309469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.309675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.309724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.309914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.309963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.310092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.310140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.310304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.310353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.310507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.310582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.310738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.310794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.310982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.311033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.311182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.311230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.311425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.311474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.311685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.311734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.311919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.311968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.312133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.312184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 
00:36:05.960 [2024-11-17 11:30:30.312368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.312417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.960 [2024-11-17 11:30:30.312559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.960 [2024-11-17 11:30:30.312609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.960 qpair failed and we were unable to recover it. 00:36:05.961 [2024-11-17 11:30:30.312796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.961 [2024-11-17 11:30:30.312843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.961 qpair failed and we were unable to recover it. 00:36:05.961 [2024-11-17 11:30:30.313019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.961 [2024-11-17 11:30:30.313067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.961 qpair failed and we were unable to recover it. 00:36:05.961 [2024-11-17 11:30:30.313290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.961 [2024-11-17 11:30:30.313338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.961 qpair failed and we were unable to recover it. 
00:36:05.963 [2024-11-17 11:30:30.339422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.963 [2024-11-17 11:30:30.339469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.963 qpair failed and we were unable to recover it. 00:36:05.963 [2024-11-17 11:30:30.339686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.963 [2024-11-17 11:30:30.339736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.963 qpair failed and we were unable to recover it. 00:36:05.963 [2024-11-17 11:30:30.339968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.963 [2024-11-17 11:30:30.340016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.963 qpair failed and we were unable to recover it. 00:36:05.963 [2024-11-17 11:30:30.340257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.963 [2024-11-17 11:30:30.340305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.963 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.340551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.340601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.340774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.340845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.341074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.341140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.341310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.341359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.341550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.341617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.341722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.341755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.341889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.341924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.342061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.342234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.342408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.342586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.342729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.342907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.342939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.343514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.343911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.343989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.344127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.344243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.344357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.344469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.344592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.344734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.344941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.344990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.345176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.345228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.345389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.345437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.345616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.345650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.345785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.345817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.345980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.346029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.346181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.346231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.346413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.346461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.346663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.346697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 
00:36:05.964 [2024-11-17 11:30:30.346884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.346933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.347135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.347184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.347390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.347440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.347642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.964 [2024-11-17 11:30:30.347685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.964 qpair failed and we were unable to recover it. 00:36:05.964 [2024-11-17 11:30:30.347778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.347811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.347919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.347946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.348042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.348068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.348159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.348208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.348402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.348450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.348634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.348666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.348812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.348844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.349041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.349280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.349419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.349539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.349726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.349938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.349987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.350171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.350219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.350376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.350424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.350634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.350667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.350845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.350872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.350994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.351020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.351164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.351214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.351441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.351491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.351673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.351706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.351838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.351870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.352080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.352129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.352327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.352375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.352581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.352615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.352746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.352779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.352975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.353189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.353394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.353620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.353797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.353936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.353985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.354137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.354187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.354423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.354472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.354637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.354682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.354791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.354816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.965 [2024-11-17 11:30:30.354919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.354968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.355155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.355203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.355431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.355479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.355627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.355660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 00:36:05.965 [2024-11-17 11:30:30.355803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.965 [2024-11-17 11:30:30.355835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.965 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.355975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.356160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.356416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.356558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.356718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.356886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.356911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.357054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.357197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.357343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.357573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.357736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.357902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.357927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.358126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.358174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.358371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.358419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.358580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.358614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.358755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.358787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.358927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.358977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.359183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.359209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.359400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.359448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.359643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.359676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.359789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.359848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.360025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.360073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.360265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.360315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.360500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.360578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.360687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.360720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.360853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.360902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.361112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.361161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.361302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.361351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.361516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.361589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.361706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.361752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.361841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.361866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.361978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.362432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 00:36:05.966 [2024-11-17 11:30:30.362893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.362920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.966 qpair failed and we were unable to recover it. 
00:36:05.966 [2024-11-17 11:30:30.363008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.966 [2024-11-17 11:30:30.363034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.363119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.363145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.363229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.363255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.363359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.363399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.363589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.363642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.363838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.363886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.364569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.364973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eec970 is same with the state(6) to be set 00:36:05.967 [2024-11-17 11:30:30.365223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.365293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.365477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.365548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.365758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.365809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.365953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.366002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.366160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.366208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.366362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.366409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.366636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.366702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.366938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.367588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.367871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.367991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.368018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.368124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.368150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 
00:36:05.967 [2024-11-17 11:30:30.368240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.368270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.368380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.368619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.967 [2024-11-17 11:30:30.368667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.967 qpair failed and we were unable to recover it. 00:36:05.967 [2024-11-17 11:30:30.368887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.368951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.369162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.369224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 
00:36:05.968 [2024-11-17 11:30:30.369362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.369407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.369586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.369633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.369813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.369859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.370042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.370095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.370243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.370288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 
00:36:05.968 [2024-11-17 11:30:30.370465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.370510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.370672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.370719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.370865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.370912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.371253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 
00:36:05.968 [2024-11-17 11:30:30.371388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.371535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.371764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.371886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.371912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.372027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.372052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 
00:36:05.968 [2024-11-17 11:30:30.372164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.372190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.372330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.372376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.372578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.372626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.372810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.372856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 00:36:05.968 [2024-11-17 11:30:30.373035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.968 [2024-11-17 11:30:30.373080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.968 qpair failed and we were unable to recover it. 
00:36:05.968 [2024-11-17 11:30:30.373224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.373269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.373451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.373496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.373685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.373711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.373799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.373825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.373901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.373927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.374046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.374186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.374360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.374615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.374806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.374992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.375038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.375175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.375220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.375359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.375405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.375597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.375646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.375772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.375817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.375994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.376041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.376236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.376281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.376415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.968 [2024-11-17 11:30:30.376462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.968 qpair failed and we were unable to recover it.
00:36:05.968 [2024-11-17 11:30:30.376636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.376688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.376878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.376926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.377152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.377198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.377351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.377397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.377574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.377621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.377790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.377825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.377932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.377958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.378118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.378167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.378371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.378416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.378606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.378633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.378715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.378742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.378893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.378939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.379210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.379269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.379456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.379506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.379704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.379751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.379922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.379968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.380159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.380208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.380395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.380459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.380689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.380735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.380959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.381873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.381898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.382072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.382135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.382364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.382434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.382622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.382671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.382818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.382863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.383045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.383107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.383316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.383364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.383501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.383598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.383781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.383806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.383948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.383972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.384057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.384081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.384193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.384218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.384298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.384324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.969 qpair failed and we were unable to recover it.
00:36:05.969 [2024-11-17 11:30:30.384410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.969 [2024-11-17 11:30:30.384436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.384581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.384607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.384684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.384709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.384816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.384841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.384946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.384970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.385096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.385121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.385208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.385269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.385428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.385475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.385698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.385737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.385862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.385890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.386859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.386927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.387197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.387246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.387377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.387423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.387570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.387618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.387756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.387802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.387983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.388037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.388225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.388270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.388412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.388460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.388621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.388668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.388823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.388887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.389146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.389194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.389424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.389483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.389670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.389718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.389911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.389958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.390169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.390216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.390386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.390431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.390620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.390690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.390883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.390931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.391188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.391332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.391473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.391597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:05.970 [2024-11-17 11:30:30.391733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:05.970 qpair failed and we were unable to recover it.
00:36:05.970 [2024-11-17 11:30:30.391856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.970 [2024-11-17 11:30:30.391881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.970 qpair failed and we were unable to recover it. 00:36:05.970 [2024-11-17 11:30:30.391959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.970 [2024-11-17 11:30:30.391984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.970 qpair failed and we were unable to recover it. 00:36:05.970 [2024-11-17 11:30:30.392124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.970 [2024-11-17 11:30:30.392150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.392264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.392385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.392503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.392677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.392787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.392952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.392996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.393213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.393283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.393477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.393505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.393602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.393630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.393736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.393762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.393854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.393890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.393978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.394004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.394121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.394148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.394276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.394326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.394560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.394607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.394776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.394822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.394960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.395239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.395518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.395672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.395803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.395943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.395970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.396111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.396137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.396337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.396383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.396599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.396646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.396825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.396871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.397084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.397147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.397302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.397352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.397493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.397567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.397761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.397813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.397955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.398203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 
00:36:05.971 [2024-11-17 11:30:30.398309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.398447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.398612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.398735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.971 qpair failed and we were unable to recover it. 00:36:05.971 [2024-11-17 11:30:30.398844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.971 [2024-11-17 11:30:30.398869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.399048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.399094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.399307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.399352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.399539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.399589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.399760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.399806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.399942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.399987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.400167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.400212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.400350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.400395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.400579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.400624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.400782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.400828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.400975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.401022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.401212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.401256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.401485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.401551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.401694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.401740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.401959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.402003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.402177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.402221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.402421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.402466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.402717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.402763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.402978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.403227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.403367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.403518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.403670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.403811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.403918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.403970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.404152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.404196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.404386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.404432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.404603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.404629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.404742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.404768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.404877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.404903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.404995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.405020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.405166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.405191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.405341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.405388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.405540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.405589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.405779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.405825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 00:36:05.972 [2024-11-17 11:30:30.405958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.972 [2024-11-17 11:30:30.406003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.972 qpair failed and we were unable to recover it. 
00:36:05.972 [2024-11-17 11:30:30.406149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.406195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.406398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.406445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.406645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.406691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.406835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.406881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.407050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.407095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 
00:36:05.973 [2024-11-17 11:30:30.407257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.407302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.407482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.407541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.407686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.407731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.407902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.407946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.408104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.408148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 
00:36:05.973 [2024-11-17 11:30:30.408295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.408340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.408497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.408560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.408696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.408740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.408825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.408851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 00:36:05.973 [2024-11-17 11:30:30.408958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.973 [2024-11-17 11:30:30.409003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.973 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.432090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.432115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.432210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.432236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.432417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.432462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.432608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.432654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.432798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.432844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.432980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.433026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.433172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.433218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.433394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.433439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.433627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.433675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.433843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.433915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.434094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.434139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.434295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.434340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.434472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.434517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.434711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.434757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.434886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.434932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.435152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.435197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.435371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.435416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.435642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.435670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.435755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.435781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.435873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.435899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.435994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.436102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.436322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.436461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.436595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.436797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.436843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.436984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.437029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.437212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.437237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.437357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.437383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.437522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.437583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 
00:36:05.976 [2024-11-17 11:30:30.437790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.976 [2024-11-17 11:30:30.437836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.976 qpair failed and we were unable to recover it. 00:36:05.976 [2024-11-17 11:30:30.438017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.438062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.438197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.438242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.438427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.438483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.438710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.438774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.438913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.438977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.439170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.439217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.439444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.439489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.439715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.439793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.440031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.440086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.440330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.440384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.440585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.440637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.440831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.440857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.440938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.440964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.441075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.441101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.441218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.441268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.441457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.441506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.441706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.441756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.441978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.442028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.442193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.442269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.442521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.442577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.442790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.442836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.443080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.443129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.443375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.443425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.443632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.443679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.443837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.443884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.444086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.444135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.444362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.444411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.444559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.444623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.444768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.444821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.445022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.445069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.445241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.445287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.445545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.445611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.445844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.445890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.446112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.446161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.446386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.446435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.446607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.446655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.446824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.446871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.447022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.447070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.447288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.447334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 
00:36:05.977 [2024-11-17 11:30:30.447532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.447599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.447749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.977 [2024-11-17 11:30:30.447794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.977 qpair failed and we were unable to recover it. 00:36:05.977 [2024-11-17 11:30:30.447987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.448041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.448254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.448306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.448533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.448580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 
00:36:05.978 [2024-11-17 11:30:30.448771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.448797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.448927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.448954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.449153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.449200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.449426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.449480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 00:36:05.978 [2024-11-17 11:30:30.449676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.978 [2024-11-17 11:30:30.449724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.978 qpair failed and we were unable to recover it. 
00:36:05.980 [... the same posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triple repeats continuously through 2024-11-17 11:30:30.476400, every attempt targeting tqpair=0x7f39b8000b90 at addr=10.0.0.2, port=4420 ...]
00:36:05.981 [2024-11-17 11:30:30.476661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.476719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.476928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.476986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.477198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.477256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.477509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.477579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.477763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.477821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.478026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.478082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.478318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.478376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.478601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.478658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.478868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.478925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.479098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.479155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.479406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.479462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.479640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.479698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.479922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.479980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.480202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.480258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.480481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.480507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.480602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.480629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.480793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.480849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.481021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.481080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.481336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.481393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.481663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.481729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.481896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.481952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.482683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.482916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.482942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.483058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.483086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.483347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.483403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.483660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.483718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.483949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.483975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.484058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.484130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.484375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.484431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.484637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.484694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 
00:36:05.981 [2024-11-17 11:30:30.484921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.484978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.485155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.485211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.485433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.485488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.981 [2024-11-17 11:30:30.485703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.981 [2024-11-17 11:30:30.485760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.981 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.485983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.486039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.486245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.486301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.486597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.486655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.486819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.486875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.487089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.487144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.487317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.487374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.487602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.487660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.487852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.487909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.488141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.488203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.488436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.488497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.488696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.488757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.488984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.489045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.489287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.489344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.489610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.489668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.489911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.489979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.490174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.490232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.490414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.490471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.490691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.490749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.491005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.491061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.491299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.491359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.491582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.491856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.491916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.492153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.492214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.492495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.492583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.492755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.492811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.492989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.493044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.493253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.493309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.493606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.493668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.493894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.493955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.494188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.494249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.494441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.494501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.494827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.494888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.495176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.495237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.495454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.495514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.495795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.495857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.496094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.496154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.982 [2024-11-17 11:30:30.496395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.496458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.496709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.496770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.497001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.497063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.497258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.497319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 00:36:05.982 [2024-11-17 11:30:30.497550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.982 [2024-11-17 11:30:30.497613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.982 qpair failed and we were unable to recover it. 
00:36:05.985 [2024-11-17 11:30:30.533980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.534046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 00:36:05.985 [2024-11-17 11:30:30.534303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.534367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 00:36:05.985 [2024-11-17 11:30:30.534616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.534683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 00:36:05.985 [2024-11-17 11:30:30.534945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.535012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 00:36:05.985 [2024-11-17 11:30:30.535259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.535323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 
00:36:05.985 [2024-11-17 11:30:30.535613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.985 [2024-11-17 11:30:30.535679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.985 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.535888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.535954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.536207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.536272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.536543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.536610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.536836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.536901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.537157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.537224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.537424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.537489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.537725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.537792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.538079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.538145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.538431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.538497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.538807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.538872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.539180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.539246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.539510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.539591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.539845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.539910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.540209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.540274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.540503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.540606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.540859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.540927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.541221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.541286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.541555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.541622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.541839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.541905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.542111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.542176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.542452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.542517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.542831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.542897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.543256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.543515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.543593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.543868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.543934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.544223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.544289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.544581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.544647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.544861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.544928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.545190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.545256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.545561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.545629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.545924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.545990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.546289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.546355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.546653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.546719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.546969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.547035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.547278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.547345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.547652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.547718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.547922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.547987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.548200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.548266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.986 [2024-11-17 11:30:30.548507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.548588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 
00:36:05.986 [2024-11-17 11:30:30.548846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.986 [2024-11-17 11:30:30.548914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.986 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.549181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.549246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.549500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.549581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.549813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.549879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.550106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.550170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.550430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.550495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.550804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.550870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.551118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.551184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.551444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.551510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.551813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.551879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.552127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.552191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.552481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.552590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.552815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.552881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.553171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.553236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.553492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.553575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.553865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.553930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.554178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.554242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.554490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.554573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.554833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.554899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.555186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.555252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.555509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.555596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.555888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.555954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.556213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.556278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.556578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.556645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.556893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.556960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.557238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.557304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.557578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.557646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.557894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.557962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.558224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.558290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 00:36:05.987 [2024-11-17 11:30:30.558581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.558648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it. 
00:36:05.987 [2024-11-17 11:30:30.558944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:05.987 [2024-11-17 11:30:30.559009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:05.987 qpair failed and we were unable to recover it.
[~110 identical repetitions elided: connect() to 10.0.0.2 port 4420 kept failing with errno = 111 (ECONNREFUSED) for tqpair=0x7f39b8000b90, with "qpair failed and we were unable to recover it." after each attempt, from 11:30:30.559 through 11:30:30.596]
00:36:06.273 [2024-11-17 11:30:30.596900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.596966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.597262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.597327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.597590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.597657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.597948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.598014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.598259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.598324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 
00:36:06.273 [2024-11-17 11:30:30.598516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.598597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.598912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.598978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.599274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.599339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.599638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.599705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.599953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.600021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 
00:36:06.273 [2024-11-17 11:30:30.600317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.600383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.600593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.600660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.600913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.600979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.601245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.601311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.601562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.601629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 
00:36:06.273 [2024-11-17 11:30:30.601937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.602003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.602204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.602269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.602551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.602618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.602874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.602940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.603182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.603246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 
00:36:06.273 [2024-11-17 11:30:30.603553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.603620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.603896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.603964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.604242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.604307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.604601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.604668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.604914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.604980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 
00:36:06.273 [2024-11-17 11:30:30.605270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.605335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.605642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.605725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.605985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.606051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.606318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.606382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.273 qpair failed and we were unable to recover it. 00:36:06.273 [2024-11-17 11:30:30.606689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.273 [2024-11-17 11:30:30.606756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.607054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.607120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.607369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.607434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.607716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.607783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.608078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.608143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.608452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.608516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.608826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.608891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.609157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.609220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.609434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.609499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.609821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.609886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.610210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.610275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.610483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.610566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.610824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.611065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.611130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.611379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.611445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.611767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.611832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.612101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.612167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.612417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.612486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.612742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.612808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.613098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.613164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.613465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.613546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.613809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.613874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.614085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.614152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.614407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.614472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.614799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.614865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.615127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.615192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.615432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.615497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.615768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.615836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.616018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.616085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.616335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.616400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.616653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.616720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 
00:36:06.274 [2024-11-17 11:30:30.616984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.274 [2024-11-17 11:30:30.617050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.274 qpair failed and we were unable to recover it. 00:36:06.274 [2024-11-17 11:30:30.617258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.617324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.617577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.617643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.617937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.618003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.618303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.618368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 
00:36:06.275 [2024-11-17 11:30:30.618630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.618697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.618968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.619044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.619308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.619373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.619582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.619652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.619861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.619928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 
00:36:06.275 [2024-11-17 11:30:30.620229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.620295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.620555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.620622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.620905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.620970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.621271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 00:36:06.275 [2024-11-17 11:30:30.621584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.621652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 
00:36:06.275 [2024-11-17 11:30:30.621915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.275 [2024-11-17 11:30:30.621980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.275 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair connection-error messages for tqpair=0x7f39b8000b90 (addr=10.0.0.2, port=4420) repeated through 2024-11-17 11:30:30.659708 ...]
00:36:06.278 [2024-11-17 11:30:30.660019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.660084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.660344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.660409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.660667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.660734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.660981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.661047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.661293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.661359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 
00:36:06.278 [2024-11-17 11:30:30.661603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.661669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.661976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.662042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.662298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.662364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.662615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.662681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.662881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.662949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 
00:36:06.278 [2024-11-17 11:30:30.663256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.663321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.663617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.663685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.278 [2024-11-17 11:30:30.663927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.278 [2024-11-17 11:30:30.663994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.278 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.664234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.664300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.664562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.664629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.664922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.664988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.665195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.665260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.665574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.665640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.665887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.665952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.666174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.666239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.666442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.666507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.666818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.666884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.667083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.667150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.667392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.667459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.667704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.667771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.667965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.668031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.668280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.668345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.668555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.668622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.668917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.668983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.669248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.669314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.669521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.669602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.669866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.669932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.670237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.670304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.670572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.670639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.670932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.670999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.671292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.671357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.671618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.671703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.671960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.672026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.672243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.672307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.672564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.672631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.672851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.672918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.673174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.673242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.673460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.673539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.673732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.673797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.674030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.674095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.674275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.674342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.674563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.674630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.674877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.674943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.675206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.675272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 00:36:06.279 [2024-11-17 11:30:30.675518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.675600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.279 qpair failed and we were unable to recover it. 
00:36:06.279 [2024-11-17 11:30:30.675829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.279 [2024-11-17 11:30:30.675896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.676140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.676207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.676449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.676514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.676740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.676806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.677070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.677136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 
00:36:06.280 [2024-11-17 11:30:30.677443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.677508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.677788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.677853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.678094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.678161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.678406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.678472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.678750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.678817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 
00:36:06.280 [2024-11-17 11:30:30.679076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.679142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.679347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.679412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.679696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.679764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.680024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.680092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.680347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.680412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 
00:36:06.280 [2024-11-17 11:30:30.680685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.680754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.680968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.681034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.681244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.681308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.681611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.681677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.681900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.681966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 
00:36:06.280 [2024-11-17 11:30:30.682231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.682295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.682507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.682588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.682825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.682891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.683080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.683147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 00:36:06.280 [2024-11-17 11:30:30.683362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.280 [2024-11-17 11:30:30.683428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.280 qpair failed and we were unable to recover it. 
00:36:06.280 [2024-11-17 11:30:30.683681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.280 [2024-11-17 11:30:30.683748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.280 qpair failed and we were unable to recover it.
[... the connect() failure above (errno = 111) and the resulting "sock connection error ... qpair failed and we were unable to recover it" messages repeat continuously from 11:30:30.683 through 11:30:30.721, alternating between tqpair=0x7f39b8000b90 and tqpair=0x7f39bc000b90, all against addr=10.0.0.2, port=4420 ...]
00:36:06.283 [2024-11-17 11:30:30.721587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.721655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 00:36:06.283 [2024-11-17 11:30:30.721923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.721991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 00:36:06.283 [2024-11-17 11:30:30.722219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.722287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 00:36:06.283 [2024-11-17 11:30:30.722600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.722670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 00:36:06.283 [2024-11-17 11:30:30.722874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.722940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 
00:36:06.283 [2024-11-17 11:30:30.723145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.283 [2024-11-17 11:30:30.723213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.283 qpair failed and we were unable to recover it. 00:36:06.284 [2024-11-17 11:30:30.723469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.284 [2024-11-17 11:30:30.723547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.284 qpair failed and we were unable to recover it. 00:36:06.284 [2024-11-17 11:30:30.723822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.284 [2024-11-17 11:30:30.723890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.284 qpair failed and we were unable to recover it. 00:36:06.284 [2024-11-17 11:30:30.724202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.284 [2024-11-17 11:30:30.724271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.284 qpair failed and we were unable to recover it. 00:36:06.284 [2024-11-17 11:30:30.724519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.284 [2024-11-17 11:30:30.724618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.284 qpair failed and we were unable to recover it. 
00:36:06.284 [2024-11-17 11:30:30.724914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.724980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.725296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.725363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.725680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.725748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.726039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.726105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.726407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.726478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.726795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.726864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.727130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.727197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.727426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.727492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.727741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.727808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.728075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.728145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.728408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.728477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.728798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.728864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.729098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.729165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.729468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.729591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.729862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.729928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.730165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.730233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.730552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.730626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.730920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.730990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.731242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.731310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.731568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.731639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.731858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.731926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.732200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.732267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.732550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.732648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.732889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.732970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.733260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.733325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.733569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.733636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.733852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.733917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.734219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.734283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.734537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.734602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.734846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.734911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.735123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.735187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.735425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.735488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.735708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.284 [2024-11-17 11:30:30.735774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.284 qpair failed and we were unable to recover it.
00:36:06.284 [2024-11-17 11:30:30.736031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.736094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.736327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.736393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.736645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.736711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.736933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.736997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.737305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.737369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.737659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.737727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.737983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.738047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.738302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.738369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.738615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.738682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.738947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.739011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.739251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.739316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.739558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.739624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.739840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.739904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.740101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.740164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.740401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.740465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.740683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.740747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.740948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.741012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.741268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.741343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.741622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.741688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.741930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.741995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.742244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.742307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.742507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.742585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.742832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.742896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.743088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.743152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.743384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.743448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.743661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.743727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.743910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.743974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.744256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.744320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.744570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.744636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.744880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.744943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.745196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.745260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.745513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.745603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.745793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.745858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.746143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.746207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.746405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.746469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.746808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.746874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.747128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.747191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.747449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.285 [2024-11-17 11:30:30.747513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.285 qpair failed and we were unable to recover it.
00:36:06.285 [2024-11-17 11:30:30.747742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.747807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.748033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.748096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.748335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.748399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.748654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.748720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.748984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.749049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.749270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.749338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.749612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.749688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.749939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.750005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.750236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.750300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.750603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.750668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.750925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.750989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.751279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.751343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.751552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.751619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.751906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.286 [2024-11-17 11:30:30.751970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.286 qpair failed and we were unable to recover it.
00:36:06.286 [2024-11-17 11:30:30.752206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.752270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.752510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.752593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.752820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.752884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.753141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.753206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.753445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.753508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 
00:36:06.286 [2024-11-17 11:30:30.753753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.753817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.754067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.754171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.754547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.754645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.754926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.754995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.755175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.755240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 
00:36:06.286 [2024-11-17 11:30:30.755499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.755580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.755825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.755891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.756088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.756152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.756434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.756499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.756726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.756791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 
00:36:06.286 [2024-11-17 11:30:30.757038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.757103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.757391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.757455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.757697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.757767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.758070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.758135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 00:36:06.286 [2024-11-17 11:30:30.758358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.286 [2024-11-17 11:30:30.758436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.286 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.758703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.758770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.759011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.759074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.759326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.759392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.759607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.759673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.759924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.759991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.760209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.760274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.760520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.760600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.760829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.760893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.761194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.761258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.761520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.761597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.761844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.761907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.762091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.762157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.762399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.762462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.762745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.762813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.763069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.763134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.763375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.763438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.763663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.763729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.764022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.764087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.764377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.764441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.764652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.764719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.764962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.765027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.765280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.765344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.765594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.765662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.765916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.765981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.766234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.766305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.766559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.766626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.766847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.766911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.767156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.767219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.767487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.767565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.767810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.767875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.768079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.768144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.768429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.768493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.768754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.768819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.769063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.769128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.769336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.769402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 
00:36:06.287 [2024-11-17 11:30:30.769641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.769706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.769954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.770020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.287 qpair failed and we were unable to recover it. 00:36:06.287 [2024-11-17 11:30:30.770248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.287 [2024-11-17 11:30:30.770314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.770613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.770678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.770910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.770984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.771234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.771300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.771582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.771647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.771853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.771917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.772153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.772218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.772446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.772509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.772720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.772785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.773022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.773087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.773296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.773360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.773605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.773671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.773952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.774018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.774258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.774322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.774578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.774643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.774843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.774908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.775165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.775230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.775470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.775545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.775831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.775894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.776149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.776213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.776458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.776537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.776798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.776862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.777107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.777171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.777351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.777416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.777684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.777750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.777959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.778025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.778311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.778376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.778598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.778664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.778873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.778940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.779159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.779225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.779449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.779515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.779837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.779904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.780202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.780266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.780561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.780628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.780882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.780947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.781201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.781265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.781513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.781594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 00:36:06.288 [2024-11-17 11:30:30.781834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.288 [2024-11-17 11:30:30.781898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.288 qpair failed and we were unable to recover it. 
00:36:06.288 [2024-11-17 11:30:30.782147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.782211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.782468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.782552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.782768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.782835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.783121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.783186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.783478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.783592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.783849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.783913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.784124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.784189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.784479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.784564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.784819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.784884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.785167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.785231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.785445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.785512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.785828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.785892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.786111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.786175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.786419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.786483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.786763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.786827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.787070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.787136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.787392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.787457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.787773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.787838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.788089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.788154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.788371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.788436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.788759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.788824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.789110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.789174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.789434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.789499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.789728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.789793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.790078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.790142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.790392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.790456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.790785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.790850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.791155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.791219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.791475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.791561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.791792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.791856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.792078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.792144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.792350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.792384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.792536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.792571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.792676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.792710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.792879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.792912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 
00:36:06.289 [2024-11-17 11:30:30.793010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.793043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.793207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.793240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.289 [2024-11-17 11:30:30.793361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.289 [2024-11-17 11:30:30.793394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.289 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.793510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.793557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.793698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.793732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.793831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.793864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.793980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.794185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.794316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.794452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.794637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.794793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.794963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.794997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.795156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.795221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.795421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.795486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.795696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.795730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.795843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.795876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.795974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.796118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.796265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.796407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.796553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.796700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.796885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.796918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.797029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.797063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.797198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.797278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.797538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.797600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.797748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.797882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.797915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.798027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.798199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.798372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.798514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.798708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.798888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.798921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.799096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.799128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.799326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.799376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.799509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.799561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.799718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.799754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 00:36:06.290 [2024-11-17 11:30:30.799874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.799911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.290 qpair failed and we were unable to recover it. 
00:36:06.290 [2024-11-17 11:30:30.800048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.290 [2024-11-17 11:30:30.800087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.800197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.800232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.800354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.800390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.800543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.800590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.800714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.800753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 
00:36:06.291 [2024-11-17 11:30:30.800867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.800902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.801048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.801084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.801217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.801287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.801586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.801623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 00:36:06.291 [2024-11-17 11:30:30.801730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.291 [2024-11-17 11:30:30.801769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.291 qpair failed and we were unable to recover it. 
00:36:06.291 [2024-11-17 11:30:30.801878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.801912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.802870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.802905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.803950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.803983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.804133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.804166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.804333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.804367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.804478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.804512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.804652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.804685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.804857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.804890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.291 qpair failed and we were unable to recover it.
00:36:06.291 [2024-11-17 11:30:30.805801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.291 [2024-11-17 11:30:30.805837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.805982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.806869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.806966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.807131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.807348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.807533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.807733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.807874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.807909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.808871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.808905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.809037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.809101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.809365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.809429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.809655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.809689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.809799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.809833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.809994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.810058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.810293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.810358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.810582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.810616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.810785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.810819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.810988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.811160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.811363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.811573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.811751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.811923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.811957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.812123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.812303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.812451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.812631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.292 [2024-11-17 11:30:30.812775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.292 qpair failed and we were unable to recover it.
00:36:06.292 [2024-11-17 11:30:30.812928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.812962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.813869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.813914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.814886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.814924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.815895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.815931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.816869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.816973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.817879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.817913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.818057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.818090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.818206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.293 [2024-11-17 11:30:30.818240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.293 qpair failed and we were unable to recover it.
00:36:06.293 [2024-11-17 11:30:30.818391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.293 [2024-11-17 11:30:30.818424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.293 qpair failed and we were unable to recover it. 00:36:06.293 [2024-11-17 11:30:30.818535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.293 [2024-11-17 11:30:30.818569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.293 qpair failed and we were unable to recover it. 00:36:06.293 [2024-11-17 11:30:30.818707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.293 [2024-11-17 11:30:30.818741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.293 qpair failed and we were unable to recover it. 00:36:06.293 [2024-11-17 11:30:30.818879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.293 [2024-11-17 11:30:30.818913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.293 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.819024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.819167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.819313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.819455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.819648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.819829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.819864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.820009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.820170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.820355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.820561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.820738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.820898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.820931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.821067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.821203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.821383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.821557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.821709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.821876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.821909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.822010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.822191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.822373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.822535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.822709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.822883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.822918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.823026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.823211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.823376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.823522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.823734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.823910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.823945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.824099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.824248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.824425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.824570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.824754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.824934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.824968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 
00:36:06.294 [2024-11-17 11:30:30.825105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.825139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.294 [2024-11-17 11:30:30.825258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.294 [2024-11-17 11:30:30.825295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.294 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.825413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.825447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.825570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.825611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.825722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.825756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.825906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.825941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.826053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.826197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.826369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.826510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.826694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.826838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.826877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.827498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.827878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.827988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.828137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.828335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.828490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.828647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.828778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.828911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.828942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.829058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.829203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.829352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.829485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.829652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 
00:36:06.295 [2024-11-17 11:30:30.829813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.829868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.830085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.830140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.830344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.830399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.830632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.830667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.295 qpair failed and we were unable to recover it. 00:36:06.295 [2024-11-17 11:30:30.830783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.295 [2024-11-17 11:30:30.830816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 
00:36:06.296 [2024-11-17 11:30:30.830965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.831129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.831276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.831436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.831617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 
00:36:06.296 [2024-11-17 11:30:30.831759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.831900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.831934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.832078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.832112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.832287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 00:36:06.296 [2024-11-17 11:30:30.832385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.296 [2024-11-17 11:30:30.832419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.296 qpair failed and we were unable to recover it. 
00:36:06.296 [2024-11-17 11:30:30.832562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.832596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.832731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.832764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.832885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.832918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.833876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.833990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.834943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.834978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.835116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.835150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.835292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.835326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.835462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.835496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.835622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.835673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.835803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.835841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.836955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.296 [2024-11-17 11:30:30.836995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.296 qpair failed and we were unable to recover it.
00:36:06.296 [2024-11-17 11:30:30.837114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.837150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.837301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.837336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.837509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.837555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.837683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.837721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.837844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.837880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.837988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.838164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.838364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.838517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.838665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.838838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.838872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.839846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.839880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.840848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.840886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.841877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.841911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.842836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.842975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.843010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.843111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.843145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.297 [2024-11-17 11:30:30.843286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.297 [2024-11-17 11:30:30.843320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.297 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.843451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.843486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.843626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.843660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.843830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.843864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.843963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.843997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.844165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.844199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.844348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.844384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.844522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.844567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.844679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.844713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.844860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.844894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.845818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.845852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.846862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.846896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.847855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.847887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.848937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.848970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.849144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.849177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.849289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.849323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.849420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.298 [2024-11-17 11:30:30.849454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.298 qpair failed and we were unable to recover it.
00:36:06.298 [2024-11-17 11:30:30.849587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.849622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.849722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.849756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.849855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.849889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.849994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.850061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.850250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.850300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.850469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.850522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.850723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.850774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.850941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.850993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.851224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.851276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.851511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.851577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.851790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.851841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.852050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.852102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.852253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.299 [2024-11-17 11:30:30.852303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.299 qpair failed and we were unable to recover it.
00:36:06.299 [2024-11-17 11:30:30.852486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.852554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.852767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.852818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.852989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.853146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.853280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 
00:36:06.299 [2024-11-17 11:30:30.853460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.853643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.853842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.853893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.854132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.854386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.854437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 
00:36:06.299 [2024-11-17 11:30:30.854650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.854684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.854801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.854839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.854946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.854981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.855128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.855187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.855392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.855443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 
00:36:06.299 [2024-11-17 11:30:30.855702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.855755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.855959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.855992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.856133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.856168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.856369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.856421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.856628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.856680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 
00:36:06.299 [2024-11-17 11:30:30.856865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.856918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.857077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.857127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.857334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.857386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.857622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.857674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 00:36:06.299 [2024-11-17 11:30:30.857878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.299 [2024-11-17 11:30:30.857929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.299 qpair failed and we were unable to recover it. 
00:36:06.299 [2024-11-17 11:30:30.858088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.858141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.858295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.858347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.858571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.858607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.858745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.858778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.858940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.858991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.859162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.859213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.859385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.859436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.859622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.859675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.859881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.859914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.860012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.860046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.860209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.860261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.860499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.860566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.860749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.860801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.861058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.861092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.861207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.861241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.861350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.861384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.861562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.861615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.861789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.861841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.862055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.862107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.862321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.862374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.862588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.862641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.862847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.862901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.863100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.863152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.863319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.863370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.863573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.863626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.863830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.863863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.864001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.864040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.864195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.864246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.864409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.864461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.864688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.864741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 
00:36:06.300 [2024-11-17 11:30:30.864922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.864975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.865218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.865251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.865363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.300 [2024-11-17 11:30:30.865396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.300 qpair failed and we were unable to recover it. 00:36:06.300 [2024-11-17 11:30:30.865542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.865596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.865772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.865825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 
00:36:06.301 [2024-11-17 11:30:30.866033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.866084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.866288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.866339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.866567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.866621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.866829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.866880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.867028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.867079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 
00:36:06.301 [2024-11-17 11:30:30.867269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.867321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.867537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.867589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.867756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.867789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.867929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.867962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.868176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.868228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 
00:36:06.301 [2024-11-17 11:30:30.868401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.868452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.868678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.868730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.868923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.868973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.869153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.869204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.869429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.869462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 
00:36:06.301 [2024-11-17 11:30:30.869595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.869630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.869798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.869849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.869998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.870048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.870233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.870311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 00:36:06.301 [2024-11-17 11:30:30.870580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.301 [2024-11-17 11:30:30.870636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.301 qpair failed and we were unable to recover it. 
00:36:06.301-00:36:06.304 [2024-11-17 11:30:30.870854 - 11:30:30.902517] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same connect() failure (errno = 111) against tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 repeats throughout this interval; every attempt ends with "qpair failed and we were unable to recover it."
00:36:06.304 [2024-11-17 11:30:30.902764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.902826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.903081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.903143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.903397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.903462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.903731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.903792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.903978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.904039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 
00:36:06.304 [2024-11-17 11:30:30.904231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.904292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.904612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.904674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.904866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.904926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.905173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.905233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.905416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.905479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 
00:36:06.304 [2024-11-17 11:30:30.905721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.905785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.906027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.906087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.304 [2024-11-17 11:30:30.906276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.304 [2024-11-17 11:30:30.906339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.304 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.906571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.906633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.906904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.906965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 
00:36:06.584 [2024-11-17 11:30:30.907190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.907250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.907435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.907497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.907748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.907810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.908052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.908112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.908363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.908428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 
00:36:06.584 [2024-11-17 11:30:30.908649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.908711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.908906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.908977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.909160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.909221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.909393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.909480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 00:36:06.584 [2024-11-17 11:30:30.909760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.909819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.584 qpair failed and we were unable to recover it. 
00:36:06.584 [2024-11-17 11:30:30.910091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.584 [2024-11-17 11:30:30.910152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.910378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.910446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.910760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.910823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.911052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.911112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.911389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.911449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.911742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.911809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.912076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.912140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.912339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.912399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.912638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.912700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.912939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.913014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.913252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.913308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.913546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.913605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.913827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.913883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.914102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.914157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.914423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.914480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.914804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.914861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.915057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.915115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.915298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.915370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.915644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.915705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.916003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.916111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.916352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.916416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.916642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.916701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.917011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.917072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.917372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.917440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.917806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.917875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.918148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.918213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.918501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.918613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.918925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.918993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.919310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.919376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.919599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.919662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 
00:36:06.585 [2024-11-17 11:30:30.919997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.920065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.920337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.920427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.585 [2024-11-17 11:30:30.920696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.585 [2024-11-17 11:30:30.920760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.585 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.921022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.921087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.921361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.921442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.921773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.921865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.922149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.922218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.922436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.922496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.922772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.922839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.923117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.923184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.923443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.923506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.923727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.923788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.923991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.924057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.924337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.924401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.924670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.924732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.924972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.925038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.925296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.925361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.925609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.925670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.925941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.926023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.926249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.926308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.926554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.926616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.926808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.926869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.927052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.927111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.927310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.927369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.927592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.927654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.927925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.927985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.928248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.928308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.928491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.928565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.928823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.928889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 00:36:06.586 [2024-11-17 11:30:30.929132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.929197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.586 qpair failed and we were unable to recover it. 
00:36:06.586 [2024-11-17 11:30:30.929445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.586 [2024-11-17 11:30:30.929510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.929750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.929819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.930111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.930177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.930469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.930559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.930772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.930838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 
00:36:06.587 [2024-11-17 11:30:30.931102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.931169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.931410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.931475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.931796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.931894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.932154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.932223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.932470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.932560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 
00:36:06.587 [2024-11-17 11:30:30.932860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.932924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.933138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.933205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.933469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.933557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.933869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.933933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.934172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.934235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 
00:36:06.587 [2024-11-17 11:30:30.934438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.934503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.934702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.934766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.935033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.935100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.935299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.935363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.935591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.935658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 
00:36:06.587 [2024-11-17 11:30:30.935914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.935979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.936172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.936236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.936553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.936619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.936857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.936922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.937151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.937215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 
00:36:06.587 [2024-11-17 11:30:30.937485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.937568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.937813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.937878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.587 [2024-11-17 11:30:30.938132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.587 [2024-11-17 11:30:30.938195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.587 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.938450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.938514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.938729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.938795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.939092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.939158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.939402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.939466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.939743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.939808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.940069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.940132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.940421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.940486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.940726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.940790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.941029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.941093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.941345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.941410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.941699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.941765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.942020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.942083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.942311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.942375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.942638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.942705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.942969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.943036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.943328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.943410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.943636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.943702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.943950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.944014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.944227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.944293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.944582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.944647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.944867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.944932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.945222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.945287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.945548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.945613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.945850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.945915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.946156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.946220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.946469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.946552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.946753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.946818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 
00:36:06.588 [2024-11-17 11:30:30.947032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.947096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.947334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.947398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.947588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.588 [2024-11-17 11:30:30.947655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.588 qpair failed and we were unable to recover it. 00:36:06.588 [2024-11-17 11:30:30.947828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.947894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.948143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.948207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.948432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.948497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.948813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.948878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.949122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.949186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.949397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.949461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.949730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.949796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.950085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.950148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.950394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.950459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.950766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.950832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.951097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.951163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.951418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.951482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.951792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.951858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.952103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.952168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.952419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.952483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.952757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.952823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.953081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.953144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.953431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.953496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.953809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.953874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.954126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.954190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.954430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.954495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.954732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.954798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.955036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.955099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.955361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.955425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.955660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.955726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.955968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.956041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 00:36:06.589 [2024-11-17 11:30:30.956307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.589 [2024-11-17 11:30:30.956370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.589 qpair failed and we were unable to recover it. 
00:36:06.589 [2024-11-17 11:30:30.956590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.589 [2024-11-17 11:30:30.956659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.589 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f39c4000b90 (addr=10.0.0.2, port=4420) repeats through 11:30:30.992; duplicates elided ...]
00:36:06.593 [2024-11-17 11:30:30.992858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.593 [2024-11-17 11:30:30.992922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.593 qpair failed and we were unable to recover it.
00:36:06.593 [2024-11-17 11:30:30.993188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.993252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.993511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.993597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.993881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.993945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.994187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.994262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.994568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.994634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:30.994930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.994994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.995287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.995351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.995609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.995674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.995969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.996034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.996334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.996399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:30.996697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.996764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.997013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.997079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.997330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.997397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.997669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.997735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.998012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.998076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:30.998342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.998408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.998639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.998705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.999011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.999075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.999315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.999379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:30.999586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.999653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:30.999916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:30.999980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.000232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.000296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.000505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.000588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.000874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.000938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.001181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.001245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:31.001456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.001521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.001778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.001843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.002088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.002152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.002423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.002487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.002772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.002839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 
00:36:06.593 [2024-11-17 11:30:31.003141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.003206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.003499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.003600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.003869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.003935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.004204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.593 [2024-11-17 11:30:31.004268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.593 qpair failed and we were unable to recover it. 00:36:06.593 [2024-11-17 11:30:31.004566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.004632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.004868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.004934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.005222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.005285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.005551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.005618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.005840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.005905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.006201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.006266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.006507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.006586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.006845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.006910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.007116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.007180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.007371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.007452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.007693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.007759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.008017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.008084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.008263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.008328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.008587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.008653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.008855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.008923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.009122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.009186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.009424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.009490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.009812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.009879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.010097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.010163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.010411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.010476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.010747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.010815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.011079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.011143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.011338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.011402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.011677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.011744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.011994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.012059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.012349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.012414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.012695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.012761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.012962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.013025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.013283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.013350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.013559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.013625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.013822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.013889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 
00:36:06.594 [2024-11-17 11:30:31.014164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.014228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.014540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.014626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.594 qpair failed and we were unable to recover it. 00:36:06.594 [2024-11-17 11:30:31.014875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.594 [2024-11-17 11:30:31.014941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.015196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.015261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.015504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.015602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 
00:36:06.595 [2024-11-17 11:30:31.015857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.015923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.016101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.016167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.016417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.016484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.016798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.016863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.017156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.017222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 
00:36:06.595 [2024-11-17 11:30:31.017466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.017546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.017844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.017910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.018128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.018193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.018400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.018466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 00:36:06.595 [2024-11-17 11:30:31.018781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.595 [2024-11-17 11:30:31.018848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.595 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.054774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.054842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.055140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.055205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.055496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.055581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.055878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.055945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.056227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.056292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.056585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.056652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.056951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.057015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.057304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.057368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.057580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.057648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.057928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.057994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.058285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.058349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.058639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.058707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.058954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.059018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.059306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.059381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.059704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.059771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.060019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.060085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.060298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.060363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.060568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.060638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.060899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.060963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.061207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.061272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.061475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.061572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.061864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.061929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.062180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.062244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.062550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.062616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.062867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.062932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 
00:36:06.598 [2024-11-17 11:30:31.063175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.063239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.063472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.063550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.063854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.598 [2024-11-17 11:30:31.063919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.598 qpair failed and we were unable to recover it. 00:36:06.598 [2024-11-17 11:30:31.064210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.064274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.064472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.064556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.064775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.064840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.065088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.065152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.065407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.065472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.065785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.065852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.066136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.066202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.066453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.066520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.066832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.066898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.067156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.067220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.067514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.067599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.067860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.067925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.068185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.068250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.068500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.068586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.068846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.068911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.069200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.069263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.069511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.069614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.069862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.069926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.070160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.070225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.070484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.070571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.070839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.070905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.071159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.071224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.071480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.071563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.071810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.071875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.072116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.072180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.072463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.072567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.072827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.072894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.073082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.073147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.073370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.073436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.073738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.073804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.074057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.074121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.599 [2024-11-17 11:30:31.074393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.074458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 
00:36:06.599 [2024-11-17 11:30:31.074728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.599 [2024-11-17 11:30:31.074793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.599 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.075023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.075087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.075270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.075336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.075561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.075627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.075910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.075976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 
00:36:06.600 [2024-11-17 11:30:31.076224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.076292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.076594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.076660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.076933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.076998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.077247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.077314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.077601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.077667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 
00:36:06.600 [2024-11-17 11:30:31.077880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.077944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.078226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.078290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.078547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.078613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.078867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.078931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.079121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.079187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 
00:36:06.600 [2024-11-17 11:30:31.079426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.079491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.079754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.079820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.080070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.080134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.080418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.080482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 00:36:06.600 [2024-11-17 11:30:31.080699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.600 [2024-11-17 11:30:31.080767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.600 qpair failed and we were unable to recover it. 
00:36:06.600 [2024-11-17 11:30:31.081065] through 00:36:06.603 [2024-11-17 11:30:31.116338]: the same error sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeated for every connection attempt in this interval.
00:36:06.603 [2024-11-17 11:30:31.116560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.116627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.116856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.116919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.117205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.117270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.117518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.117601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.117847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.117911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 
00:36:06.603 [2024-11-17 11:30:31.118156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.118220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.118514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.118616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.118833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.118896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.119103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.119170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.119421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.119486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 
00:36:06.603 [2024-11-17 11:30:31.119764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.119839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.120033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.120096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.120366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.120429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.120679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.120744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.120956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.121020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 
00:36:06.603 [2024-11-17 11:30:31.121273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.121339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.121603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.121668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.121949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.122013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.122221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.122284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.122537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.122602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 
00:36:06.603 [2024-11-17 11:30:31.122856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.603 [2024-11-17 11:30:31.122919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.603 qpair failed and we were unable to recover it. 00:36:06.603 [2024-11-17 11:30:31.123171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.123235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.123521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.123599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.123849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.123912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.124194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.124259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.124553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.124617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.124815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.124879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.125068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.125132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.125419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.125482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.125738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.125802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.126013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.126077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.126317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.126381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.126668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.126734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.127017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.127081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.127324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.127386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.127671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.127735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.128033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.128097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.128408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.128472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.128746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.128810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.129098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.129162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.129420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.129484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.129739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.129802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.130044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.130108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.130349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.130414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.130595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.130661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.130899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.130963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.131228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.131293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.131560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.131626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.131922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.131986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.132269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.132335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 
00:36:06.604 [2024-11-17 11:30:31.132583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.132658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.132940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.133003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.133205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.133269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.133563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.604 [2024-11-17 11:30:31.133629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.604 qpair failed and we were unable to recover it. 00:36:06.604 [2024-11-17 11:30:31.133844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.133910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 
00:36:06.605 [2024-11-17 11:30:31.134104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.134167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.134420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.134485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.134759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.134824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.135065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.135128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.135376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.135440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 
00:36:06.605 [2024-11-17 11:30:31.135721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.135789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.136031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.136095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.136361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.136426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.136688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.136753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.137072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.137137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 
00:36:06.605 [2024-11-17 11:30:31.137384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.137447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.137695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.137760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.137983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.138047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.138290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.138354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.138594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.138660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 
00:36:06.605 [2024-11-17 11:30:31.138864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.138931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.139178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.139245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.139491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.139569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.139864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.139928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 00:36:06.605 [2024-11-17 11:30:31.140221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.140286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it. 
00:36:06.605 [2024-11-17 11:30:31.140545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.605 [2024-11-17 11:30:31.140610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.605 qpair failed and we were unable to recover it.
00:36:06.605 [... the same error pair — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1054:posix_sock_create, followed by the nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock connection error and "qpair failed and we were unable to recover it." — repeated ~115 more times for tqpair=0x7f39c4000b90, addr=10.0.0.2, port=4420, between 11:30:31.140 and 11:30:31.177 ...]
00:36:06.608 [2024-11-17 11:30:31.177912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.177976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.178228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.178292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.178505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.178587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.178809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.178871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.179123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.179188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 
00:36:06.608 [2024-11-17 11:30:31.179395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.179460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.179739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.179805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.180071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.180136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.180422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.180487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 00:36:06.608 [2024-11-17 11:30:31.180763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.180827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.608 qpair failed and we were unable to recover it. 
00:36:06.608 [2024-11-17 11:30:31.181076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.608 [2024-11-17 11:30:31.181140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.181428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.181492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.181712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.181779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.182041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.182106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.182294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.182359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.182557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.182623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.182845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.182911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.183122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.183187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.183454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.183517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.183788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.183853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.184144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.184219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.184417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.184484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.184757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.184822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.185069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.185132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.185370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.185437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.185688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.185753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.185996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.186061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.186284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.186349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.186572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.186638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.186924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.186989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.187234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.187298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.187544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.187609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.187861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.187925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.188166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.188231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.188439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.188505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.188796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.188862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.189118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.189182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.189392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.189457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.189714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.189780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.190038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.190102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.190368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.190431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.190711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.190776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.191022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.191086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.191364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.191429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.191658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.191725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 
00:36:06.609 [2024-11-17 11:30:31.191966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.192033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.192316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.192380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.192590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.609 [2024-11-17 11:30:31.192657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.609 qpair failed and we were unable to recover it. 00:36:06.609 [2024-11-17 11:30:31.192866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.192932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.193144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.193210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.193402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.193466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.193677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.193743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.194031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.194097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.194312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.194376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.194628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.194695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.194922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.194988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.195277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.195341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.195552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.195618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.195865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.195929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.196173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.196240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.196467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.196559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.196798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.196862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.197071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.197136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.197376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.197441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.197707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.197774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.198022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.198086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.198364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.198428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.198662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.198730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.198919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.198984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.199224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.199288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.199552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.199617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.199812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.199878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.200082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.200148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.200437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.200502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 00:36:06.610 [2024-11-17 11:30:31.200793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.610 [2024-11-17 11:30:31.200859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.610 qpair failed and we were unable to recover it. 
00:36:06.610 [2024-11-17 11:30:31.201048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.201114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.201342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.201405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.201666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.201733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.201928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.201994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.202272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.202336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.202586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.202651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.202838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.202902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.203140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.203204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.203419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.203482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.203774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.203839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.204126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.610 [2024-11-17 11:30:31.204190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.610 qpair failed and we were unable to recover it.
00:36:06.610 [2024-11-17 11:30:31.204442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.204506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.204861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.204985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.205252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.205321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.205615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.205690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.206030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.206099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.206362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.206431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.206748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.206816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.207075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.207141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.207509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.207647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.208030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.208124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.208437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.208548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.208857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.208944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.209266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.209337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.209624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.209701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.209925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.209991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.210269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.210359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.210679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.210767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.211082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.211169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.211546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.211636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.212003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.212090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.212437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.212508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.212847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.212920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.213210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.213274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.213574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.213663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.214004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.214438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.214521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.214815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.214904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.215217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.215307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.215648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.215768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.216105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.216184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.216426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.216496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.216784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.216854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.217099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.217165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.217397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.217461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.217743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.217810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.218065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.218147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.218454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.218587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.218863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.611 [2024-11-17 11:30:31.218932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.611 qpair failed and we were unable to recover it.
00:36:06.611 [2024-11-17 11:30:31.219164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.219228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.219516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.219606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.219863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.219929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.220180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.220258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.220515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.220614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.220856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.220946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.612 [2024-11-17 11:30:31.221217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.612 [2024-11-17 11:30:31.221303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.612 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.221567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.221635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.221865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.221931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.222131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.222194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.222383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.222447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.222688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.222768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.892 [2024-11-17 11:30:31.223023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.892 [2024-11-17 11:30:31.223088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.892 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.223301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.223365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.223589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.223656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.223890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.223954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.224153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.224217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.224419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.224482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.224767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.224832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.225091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.225157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.225403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.225467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.225700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.225765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.225940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.226005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.226211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.226275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.226502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.226589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.226843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.226908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.227160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.227227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.227437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.227501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.227773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.227838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.228044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.228110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.228370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.228434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.228670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.228739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.228992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.229057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.229289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.229354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.229597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.229664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.229878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.229942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.230232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.230296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.230513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.230606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.230825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.230892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.231141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.231207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.231506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.231588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.231880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.231944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.232172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.232237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.232433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.232507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.232742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.232807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.233026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.233089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.233308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.233373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.233612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.233678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.233875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.233939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.234179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.893 [2024-11-17 11:30:31.234244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.893 qpair failed and we were unable to recover it.
00:36:06.893 [2024-11-17 11:30:31.234490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.234575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.234766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.234831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.235043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.235111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.235361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.235427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.235653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.235720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.235974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.236039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.236263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.236327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.236557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.236625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.236836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.236900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.237152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.237218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.237465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.237558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.237810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.237875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.238108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.238171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.238421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.238485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.238728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.238792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.238996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.894 [2024-11-17 11:30:31.239063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.894 qpair failed and we were unable to recover it.
00:36:06.894 [2024-11-17 11:30:31.239278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.239341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.239674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.239961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.240026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.240245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.240307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.240582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.240648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 
00:36:06.894 [2024-11-17 11:30:31.240944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.241008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.241252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.241315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.241543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.241609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.241857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.241921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.242175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.242238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 
00:36:06.894 [2024-11-17 11:30:31.242495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.242576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.242863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.242928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.243145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.243208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.243456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.243522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.243836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.243901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 
00:36:06.894 [2024-11-17 11:30:31.244140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.244204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.244402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.244467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.244737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.244813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.245110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.245174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.245479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.245562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 
00:36:06.894 [2024-11-17 11:30:31.245814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.245878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.246052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.894 [2024-11-17 11:30:31.246116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.894 qpair failed and we were unable to recover it. 00:36:06.894 [2024-11-17 11:30:31.246365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.246429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.246757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.246825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.247118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.247184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.247444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.247507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.247839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.247903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.248154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.248220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.248477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.248560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.248826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.248891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.249175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.249241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.249560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.249627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.249848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.249913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.250172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.250237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.250561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.250627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.250874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.250938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.251156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.251219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.251469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.251549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.251801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.251866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.252129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.252194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.252496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.252576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.252870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.252933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.253179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.253244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.253502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.253586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.253836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.253909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.254158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.254222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.254481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.254565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.254820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.254882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.255172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.255237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.255489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.255574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.255880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.256096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.256161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.256460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.256544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.256843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.256907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.257161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.257225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 
00:36:06.895 [2024-11-17 11:30:31.257514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.257596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.257838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.257906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.258193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.258258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.258589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.895 [2024-11-17 11:30:31.258655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.895 qpair failed and we were unable to recover it. 00:36:06.895 [2024-11-17 11:30:31.258898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.258964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 
00:36:06.896 [2024-11-17 11:30:31.259244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.259308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.259542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.259607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.259830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.259893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.260146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.260210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.260454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.260519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 
00:36:06.896 [2024-11-17 11:30:31.260730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.260795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.261039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.261102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.261362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.261425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.261682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.261747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.261979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.262044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 
00:36:06.896 [2024-11-17 11:30:31.262339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.262403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.262690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.262756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.263001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.263064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.263304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.263367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 00:36:06.896 [2024-11-17 11:30:31.263653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.263719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 
00:36:06.896 [2024-11-17 11:30:31.263968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.896 [2024-11-17 11:30:31.264030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.896 qpair failed and we were unable to recover it. 
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages repeated for each retry against tqpair=0x7f39c4000b90, addr=10.0.0.2, port=4420, from 11:30:31.264 through 11:30:31.301; repeats omitted]
00:36:06.899 [2024-11-17 11:30:31.301735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.301799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.302053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.302119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.302408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.302473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.302788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.302853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.303139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.303202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 
00:36:06.899 [2024-11-17 11:30:31.303457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.303522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.303846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.303911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.304126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.304191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.304480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.304563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.304777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.304841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 
00:36:06.899 [2024-11-17 11:30:31.305062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.305126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.305424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.305489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.305712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.305779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.306069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.306134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.899 qpair failed and we were unable to recover it. 00:36:06.899 [2024-11-17 11:30:31.306375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.899 [2024-11-17 11:30:31.306448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.306705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.306771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.307024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.307087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.307376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.307440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.307744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.307811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.308051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.308115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.308369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.308433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.308761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.308827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.309129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.309192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.309503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.309587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.309844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.309908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.310206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.310270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.310559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.310625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.310924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.310988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.311240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.311306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.311559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.311626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.311872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.311936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.312182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.312246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.312497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.312576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.312822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.312885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.313183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.313247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.313552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.313618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.313914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.313978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.314225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.314289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.314587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.314653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.314916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.314982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.315239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.315303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.315611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.315677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.315971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.316036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.316279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.316347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.316642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.316708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.316964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.317027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.317278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.317342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.317599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.317664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.317901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.317966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.318171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.318235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 
00:36:06.900 [2024-11-17 11:30:31.318540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.318606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.318892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.900 [2024-11-17 11:30:31.318955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.900 qpair failed and we were unable to recover it. 00:36:06.900 [2024-11-17 11:30:31.319175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.319239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.319473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.319555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.319785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.319860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.901 [2024-11-17 11:30:31.320105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.320169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.320452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.320516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.320778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.320841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.321080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.321143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.321355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.321422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.901 [2024-11-17 11:30:31.321687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.321752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.322034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.322099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.322351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.322416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.322740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.322807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.323065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.323132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.901 [2024-11-17 11:30:31.323386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.323450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.323711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.323777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.324054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.324119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.324383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.324448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.324674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.324740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.901 [2024-11-17 11:30:31.324995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.325060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.325344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.325409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.325661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.325726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.326015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.326078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 00:36:06.901 [2024-11-17 11:30:31.326318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.901 [2024-11-17 11:30:31.326383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.901 [2024-11-17 11:30:31.326693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:06.901 [2024-11-17 11:30:31.326760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 
00:36:06.901 qpair failed and we were unable to recover it. 
00:36:06.904 [... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 11:30:31.327 through 11:30:31.358 ...] 
00:36:06.904 [2024-11-17 11:30:31.358592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.358621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.358714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.358742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.358852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.358879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.358975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.359088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 
00:36:06.904 [2024-11-17 11:30:31.359244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.359388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.359541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.904 [2024-11-17 11:30:31.359665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.904 [2024-11-17 11:30:31.359697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.904 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.359788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.359816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.359914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.359943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.360646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.360930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.360958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.361080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.361199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.361372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.361484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.361658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.361831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.361868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.362028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.362218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.362355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.362512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.362667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.362886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.362937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.363033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.363188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.363307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.363418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.363573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.363718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.363899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.363932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.364100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.364285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.364405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.364537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.364671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.364804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.364832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.365000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.365047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.365082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eec970 (9): Bad file descriptor 00:36:06.905 [2024-11-17 11:30:31.365259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.365298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 
00:36:06.905 [2024-11-17 11:30:31.365411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.905 [2024-11-17 11:30:31.365457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.905 qpair failed and we were unable to recover it. 00:36:06.905 [2024-11-17 11:30:31.365581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.365616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.365724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.365756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.365863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.365895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.366027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.366247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.366397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.366511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.366681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.366805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.366833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.367000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.367216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.367342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.367497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.367622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.367739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.367766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.367935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.368121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.368277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.368437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.368607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.368743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.368917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.368951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.369056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.369194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.369329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.369486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.369649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.369803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.369862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 00:36:06.906 [2024-11-17 11:30:31.370044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.906 [2024-11-17 11:30:31.370090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.906 qpair failed and we were unable to recover it. 
00:36:06.906 [2024-11-17 11:30:31.370178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.906 [2024-11-17 11:30:31.370205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.906 qpair failed and we were unable to recover it.
00:36:06.906 [2024-11-17 11:30:31.370299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.906 [2024-11-17 11:30:31.370327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.906 qpair failed and we were unable to recover it.
00:36:06.906 [2024-11-17 11:30:31.370415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.906 [2024-11-17 11:30:31.370443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.906 qpair failed and we were unable to recover it.
00:36:06.906 [2024-11-17 11:30:31.370582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.906 [2024-11-17 11:30:31.370616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.906 qpair failed and we were unable to recover it.
00:36:06.906 [2024-11-17 11:30:31.370722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.906 [2024-11-17 11:30:31.370751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.906 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.370872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.370900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.371899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.371933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.372120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.372181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.372479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.372552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.372684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.372712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.372810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.372839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.373931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.373958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.374951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.374980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.375866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.375898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.376928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.376961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.377128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.377189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.907 qpair failed and we were unable to recover it.
00:36:06.907 [2024-11-17 11:30:31.377391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.907 [2024-11-17 11:30:31.377454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.377638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.377668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.377790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.377818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.377932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.377979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.378141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.378195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.378325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.378358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.378519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.378554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.378666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.378698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.378854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.378886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.379967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.379995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.380137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.380257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.380435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.380620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.380815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.380977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.381008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.381173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.381205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.381406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.381468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.381637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.381667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.381798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.381847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.382015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.382074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.382320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.382356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.382558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.382589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.382710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.382737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.382907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.382960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.383939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.908 [2024-11-17 11:30:31.383967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.908 qpair failed and we were unable to recover it.
00:36:06.908 [2024-11-17 11:30:31.384055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.384879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.384995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.385023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.385137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.385165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.385345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.385402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.385625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.385667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.385766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.385795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.386915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.386949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.387140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.387211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.387385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.387444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.387640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.387670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.387789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.387821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.387980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.388160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.388366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.388645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.388762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.388900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.388934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.389121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.389177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.389401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.389459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.389632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.389665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.389754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.389782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.389882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.389954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.390176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.390232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.390504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.390590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.390679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.909 [2024-11-17 11:30:31.390707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.909 qpair failed and we were unable to recover it.
00:36:06.909 [2024-11-17 11:30:31.390836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.909 [2024-11-17 11:30:31.390863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.909 qpair failed and we were unable to recover it. 00:36:06.909 [2024-11-17 11:30:31.390981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.909 [2024-11-17 11:30:31.391008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.909 qpair failed and we were unable to recover it. 00:36:06.909 [2024-11-17 11:30:31.391092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.909 [2024-11-17 11:30:31.391121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.909 qpair failed and we were unable to recover it. 00:36:06.909 [2024-11-17 11:30:31.391300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.391356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.391594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.391622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.391737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.391765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.391921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.391977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.392226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.392282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.392476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.392536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.392703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.392730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.392883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.392911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.393065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.393270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.393473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.393637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.393774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.393937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.393996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.394117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.394305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.394449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.394559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.394737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.394861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.394888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.395330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.395855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.395882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.395975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.396114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.396228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.396409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.396582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.396707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.396858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.396887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.397038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.397066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.397187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.397215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.910 [2024-11-17 11:30:31.397302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.397331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 
00:36:06.910 [2024-11-17 11:30:31.397462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.910 [2024-11-17 11:30:31.397504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.910 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.397684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.397726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.398006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.398067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.398328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.398361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.398554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.398615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.398710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.398739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.398846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.398874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.399501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.399868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.399989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.400140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.400310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.400438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.400638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.400792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.400843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.400986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.401139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.401291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.401422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.401625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.401795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.401864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.402055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.402205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.402410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.402539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.402691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 
00:36:06.911 [2024-11-17 11:30:31.402816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.402964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.402992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.403142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.403172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.403302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.911 [2024-11-17 11:30:31.403332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.911 qpair failed and we were unable to recover it. 00:36:06.911 [2024-11-17 11:30:31.403488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.403535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 
00:36:06.912 [2024-11-17 11:30:31.403643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.403672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 00:36:06.912 [2024-11-17 11:30:31.403762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.403791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 00:36:06.912 [2024-11-17 11:30:31.403893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.403921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 00:36:06.912 [2024-11-17 11:30:31.404054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.404086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 00:36:06.912 [2024-11-17 11:30:31.404179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.912 [2024-11-17 11:30:31.404211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.912 qpair failed and we were unable to recover it. 
00:36:06.912 [2024-11-17 11:30:31.404882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.912 [2024-11-17 11:30:31.404914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:06.912 qpair failed and we were unable to recover it.
00:36:06.912 [2024-11-17 11:30:31.408540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.912 [2024-11-17 11:30:31.408582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:06.912 qpair failed and we were unable to recover it.
00:36:06.912 [2024-11-17 11:30:31.408687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.912 [2024-11-17 11:30:31.408728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:06.912 qpair failed and we were unable to recover it.
00:36:06.915 [2024-11-17 11:30:31.421085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.421677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.421932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.421959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.422106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.422134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.422280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.422425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.422467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.422623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.422657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.422752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.422832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.423034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.423081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.423294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.423361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.423497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.423539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.423690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.423737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.423866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.423910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.424330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.424953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.424998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.425135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.425189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.425302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.425330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.425448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.425476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.425567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.425596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 00:36:06.915 [2024-11-17 11:30:31.425709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.915 [2024-11-17 11:30:31.425737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.915 qpair failed and we were unable to recover it. 
00:36:06.915 [2024-11-17 11:30:31.425867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.425895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.426605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.426899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.426993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.427129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.427267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.427420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.427598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.427724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.427875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.427915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.428128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.428180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.428384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.428432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.428612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.428643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.428766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.428795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.428966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.429194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.429418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.429557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.429664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.429811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.429957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.429985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.430684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.430933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.430965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.431051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.431079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.431205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.431233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 
00:36:06.916 [2024-11-17 11:30:31.431352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.431380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.431530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.431560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.916 [2024-11-17 11:30:31.431689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.916 [2024-11-17 11:30:31.431716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.916 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.431810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.431838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.431922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.431950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-17 11:30:31.432050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.432175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.432301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.432411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 00:36:06.917 [2024-11-17 11:30:31.432538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it. 
00:36:06.917 [2024-11-17 11:30:31.432646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.917 [2024-11-17 11:30:31.432675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.917 qpair failed and we were unable to recover it.
00:36:06.917-00:36:06.920 [... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it.) repeats for roughly 110 further connection attempts between 11:30:31.432819 and 11:30:31.450242, across tqpair=0x7f39b8000b90, 0x7f39bc000b90, 0x7f39c4000b90, and 0x1edeb40, all targeting addr=10.0.0.2, port=4420 ...]
00:36:06.920 [2024-11-17 11:30:31.450362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.450391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.450533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.450562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.450683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.450728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.450823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.450850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.450967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.450995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 
00:36:06.920 [2024-11-17 11:30:31.451110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.451226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.451352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.451480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.451608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 
00:36:06.920 [2024-11-17 11:30:31.451791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.451924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.451964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.452191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.452226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.452358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.452392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.452538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.452585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 
00:36:06.920 [2024-11-17 11:30:31.452724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.452756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.452893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.452923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.453050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.453081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.453189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.453218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 00:36:06.920 [2024-11-17 11:30:31.453307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.920 [2024-11-17 11:30:31.453334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.920 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.453451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.453478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.453586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.453616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.453713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.453742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.453873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.453901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.454024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.454136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.454280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.454387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.454522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.454689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.454851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.454882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.455636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.455891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.455918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.456093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.456319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.456462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.456587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.456745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.456903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.456930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.457034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.457239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.457520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.457674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.457801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.457944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.457971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.458145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.458310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.458503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.458679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.458806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 
00:36:06.921 [2024-11-17 11:30:31.458958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.458994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.459119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.459147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.921 qpair failed and we were unable to recover it. 00:36:06.921 [2024-11-17 11:30:31.459286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.921 [2024-11-17 11:30:31.459317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.459427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.459456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.459579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.459609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 
00:36:06.922 [2024-11-17 11:30:31.459697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.459725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.459849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.459879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 
00:36:06.922 [2024-11-17 11:30:31.460474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.460797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.460965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.461195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 
00:36:06.922 [2024-11-17 11:30:31.461430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.461677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.461800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.461923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.461951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 00:36:06.922 [2024-11-17 11:30:31.462067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.922 [2024-11-17 11:30:31.462112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.922 qpair failed and we were unable to recover it. 
00:36:06.922 [2024-11-17 11:30:31.462308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.922 [2024-11-17 11:30:31.462357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.922 qpair failed and we were unable to recover it.
00:36:06.925 [... the same three-line record — connect() failed (errno = 111, ECONNREFUSED), "sock connection error of tqpair=... with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats continuously from 11:30:31.462451 through 11:30:31.479420 for tqpair handles 0x1edeb40, 0x7f39b8000b90, 0x7f39bc000b90, and 0x7f39c4000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:36:06.925 [2024-11-17 11:30:31.479577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.479606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.479726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.479754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.479848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.479876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.479995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.480114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 
00:36:06.925 [2024-11-17 11:30:31.480264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.480412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.480555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.480750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.480923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 
00:36:06.925 [2024-11-17 11:30:31.481015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.481044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.481188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.481216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.481323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.925 [2024-11-17 11:30:31.481366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.925 qpair failed and we were unable to recover it. 00:36:06.925 [2024-11-17 11:30:31.481497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.481534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.481631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.481660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.481752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.481781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.481903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.481930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.482420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.482902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.482930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.483054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.483170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.483301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.483427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.483551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.483659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.483850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.483882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.484629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.484968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.484997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.485359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.485908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.485935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.486026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.486186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.486330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.486475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.926 [2024-11-17 11:30:31.486658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 
00:36:06.926 [2024-11-17 11:30:31.486790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.926 [2024-11-17 11:30:31.486820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.926 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.486961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.486991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.487119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.487153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.487288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.487319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.487474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.487504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 
00:36:06.927 [2024-11-17 11:30:31.487634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.487662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.487779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.487826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.487957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 
00:36:06.927 [2024-11-17 11:30:31.488435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.488951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.488978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 
00:36:06.927 [2024-11-17 11:30:31.489102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.489223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.489373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.489482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.489634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 
00:36:06.927 [2024-11-17 11:30:31.489865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.489895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.490015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.490043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.490191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.490219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.490306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.490334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 00:36:06.927 [2024-11-17 11:30:31.490434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.927 [2024-11-17 11:30:31.490463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.927 qpair failed and we were unable to recover it. 
00:36:06.927 [2024-11-17 11:30:31.490589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.927 [2024-11-17 11:30:31.490619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.927 qpair failed and we were unable to recover it.
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 11:30:31.490726 through 11:30:31.507127 (log timestamps 00:36:06.927-00:36:06.930), every occurrence with errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."; the tqpair handle cycles through 0x7f39c4000b90, 0x7f39bc000b90, and 0x1edeb40 ...]
00:36:06.930 [2024-11-17 11:30:31.507247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.930 qpair failed and we were unable to recover it. 00:36:06.930 [2024-11-17 11:30:31.507370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.930 qpair failed and we were unable to recover it. 00:36:06.930 [2024-11-17 11:30:31.507541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.930 qpair failed and we were unable to recover it. 00:36:06.930 [2024-11-17 11:30:31.507662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.930 qpair failed and we were unable to recover it. 00:36:06.930 [2024-11-17 11:30:31.507779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.930 qpair failed and we were unable to recover it. 
00:36:06.930 [2024-11-17 11:30:31.507892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.930 [2024-11-17 11:30:31.507920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.508648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.508907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.508999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.509182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.509311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.509441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.509627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.509786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.509934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.509963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.510079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.510749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.510894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.510987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.511336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.511839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.511952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.511980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.512113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.512155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.512255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.512285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.512413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.512441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.512578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.512607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 
00:36:06.931 [2024-11-17 11:30:31.512758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.512787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.512960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.513000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.513152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.931 [2024-11-17 11:30:31.513200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.931 qpair failed and we were unable to recover it. 00:36:06.931 [2024-11-17 11:30:31.513369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.513408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.513557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.513585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.513706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.513734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.513889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.513917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.514085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.514124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.514293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.514333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.514543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.514600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.514714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.514742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.514856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.514884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.515030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.515194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.515390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.515566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.515692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.515855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.515882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.516078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.516301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.516470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.516684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.516806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.516944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.516995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.517191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.517230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.517382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.517420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.517589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.517618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.517730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.517758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.517878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.517906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.518029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.518187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.518351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.518555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.518708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 00:36:06.932 [2024-11-17 11:30:31.518857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.932 [2024-11-17 11:30:31.518897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:06.932 qpair failed and we were unable to recover it. 
00:36:06.932 [2024-11-17 11:30:31.519047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.932 [2024-11-17 11:30:31.519086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:06.932 qpair failed and we were unable to recover it.
[... the same record pair (posix.c:1054:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, followed by "qpair failed and we were unable to recover it.") repeats roughly 110 more times between 11:30:31.519 and 11:30:31.539, always against addr=10.0.0.2, port=4420, cycling over tqpair handles 0x7f39b8000b90, 0x7f39bc000b90 and 0x7f39c4000b90. errno 111 on Linux is ECONNREFUSED: the target at 10.0.0.2:4420 was refusing TCP connections throughout this window. ...]
00:36:07.222 [2024-11-17 11:30:31.539051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.539080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.539233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.539285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.539421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.539449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.539596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.539646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.539829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.539880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.540039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.540204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.540334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.540509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.540650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.540762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.540886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.540914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.541403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.541873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.541901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.542000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.542184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.542336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.542532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.542680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.542921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.542973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.543133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.543276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.543415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.543560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.543735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 
00:36:07.222 [2024-11-17 11:30:31.543894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.222 [2024-11-17 11:30:31.543922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.222 qpair failed and we were unable to recover it. 00:36:07.222 [2024-11-17 11:30:31.544045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.544203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.544329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.544473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.544645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.544822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.544851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.545004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.545211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.545460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.545664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.545813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.545945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.545986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.546181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.546336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.546489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.546652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.546820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.546959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.546987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.547145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.547195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.547420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.547460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.547641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.547669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.547790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.547818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.548012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.548161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.548330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.548535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.548858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.548886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.549017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.549180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.549340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.549669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.549866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.549918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 
00:36:07.223 [2024-11-17 11:30:31.550036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.550094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.550207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.550259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.550393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.550420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.223 [2024-11-17 11:30:31.550505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.223 [2024-11-17 11:30:31.550541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.223 qpair failed and we were unable to recover it. 00:36:07.224 [2024-11-17 11:30:31.550628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.550656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 
00:36:07.224 [2024-11-17 11:30:31.550802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.550829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 00:36:07.224 [2024-11-17 11:30:31.550944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.550995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 00:36:07.224 [2024-11-17 11:30:31.551076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.551104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 00:36:07.224 [2024-11-17 11:30:31.551233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.551260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 00:36:07.224 [2024-11-17 11:30:31.551372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.224 [2024-11-17 11:30:31.551400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.224 qpair failed and we were unable to recover it. 
00:36:07.224 [2024-11-17 11:30:31.551485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.551512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.551676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.551707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.551839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.551867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.552880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.552909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.553946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.553974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.554123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.554165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.554333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.554375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.554597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.554626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.554740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.554768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.554889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.554918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.555048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.555077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.555202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.555243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.555457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.555498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.555666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.224 [2024-11-17 11:30:31.555694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.224 qpair failed and we were unable to recover it.
00:36:07.224 [2024-11-17 11:30:31.555817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.555846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.555940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.555994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.556196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.556237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.556383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.556449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.556655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.556684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.556805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.557917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.557966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.558905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.558932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.559817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.559845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.560887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.560915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.561112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.561276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.561483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.561688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.561831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.561964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.562013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.562163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.562220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.225 [2024-11-17 11:30:31.562342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.225 [2024-11-17 11:30:31.562369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.225 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.562464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.562491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.562585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.562613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.562732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.562760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.562917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.562944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.563902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.563930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.564967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.564994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.565150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.565324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.565498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.565647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.565804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.565972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.566123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.566298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.566500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.566635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.566826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.566872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.567887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.567916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.568034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.568062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.568152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.568189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.226 [2024-11-17 11:30:31.568344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.226 [2024-11-17 11:30:31.568372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.226 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.568461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.568489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.568629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.568716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.568743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.568867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.568894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.569852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.569972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.570000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.570116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.570144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.570260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.570288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.570410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.227 [2024-11-17 11:30:31.570439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.227 qpair failed and we were unable to recover it.
00:36:07.227 [2024-11-17 11:30:31.570543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.570573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.570699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.570750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.570860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.570914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 
00:36:07.227 [2024-11-17 11:30:31.571344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.571861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.571889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 
00:36:07.227 [2024-11-17 11:30:31.572022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.572050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.572133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.572161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.572334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.572376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.572560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.572617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.572879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.572922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 
00:36:07.227 [2024-11-17 11:30:31.573127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.573162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.573296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.573337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.573509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.573552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.573751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.573793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.573995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.574047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 
00:36:07.227 [2024-11-17 11:30:31.574199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.574240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.574434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.574476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.574650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.227 [2024-11-17 11:30:31.574680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.227 qpair failed and we were unable to recover it. 00:36:07.227 [2024-11-17 11:30:31.574812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.574840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.574973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.575176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.575389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.575569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.575685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.575819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.575847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.576029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.576213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.576446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.576618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.576791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.576908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.576936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.577108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.577148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.577283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.577355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.577565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.577595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.577741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.577769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.577865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.577893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.578012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.578204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.578420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.578635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.578756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.578925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.578953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.579067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.579242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.579445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.579608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.579757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.579917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.579963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.580380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.580969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.580997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 
00:36:07.228 [2024-11-17 11:30:31.581092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.228 [2024-11-17 11:30:31.581120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.228 qpair failed and we were unable to recover it. 00:36:07.228 [2024-11-17 11:30:31.581221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.581375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.581484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.581638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.581758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.581945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.581973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.582432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.582945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.582973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.583090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.583265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.583388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.583550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.583673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.583825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.583947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.583974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.584474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.584891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.584918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.585020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.585049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 
00:36:07.229 [2024-11-17 11:30:31.585201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.585228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.585329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.585356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.585445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.585473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.585611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.229 [2024-11-17 11:30:31.585640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.229 qpair failed and we were unable to recover it. 00:36:07.229 [2024-11-17 11:30:31.585757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.585786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.585891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.585919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.586598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.586870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.586899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.587236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.587778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.587898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.587925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.588543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.588832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.588975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.589117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.589278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.589427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.589607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.589837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.589889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.590039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.590196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.590372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.590491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.590635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.590793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.590826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 
00:36:07.230 [2024-11-17 11:30:31.590971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.591002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.591129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.230 [2024-11-17 11:30:31.591158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.230 qpair failed and we were unable to recover it. 00:36:07.230 [2024-11-17 11:30:31.591238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.591354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.591498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.591636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.591756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.591910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.591938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.592058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.592207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.592399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.592571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.592728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.592926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.592959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.593092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.593304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.593478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.593627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.593767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.593920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.593948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.594095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.594257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.594399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.594543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.594735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.594855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.594884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.595066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.595123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.595248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.595293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.595428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.595460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.595583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.595612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.595796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.595856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.231 [2024-11-17 11:30:31.596677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.596910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.596939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.597058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.597085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 00:36:07.231 [2024-11-17 11:30:31.597196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.231 [2024-11-17 11:30:31.597224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.231 qpair failed and we were unable to recover it. 
00:36:07.232 [2024-11-17 11:30:31.597367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.597394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.597530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.597572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.597729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.597758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.597881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.597909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.598022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 
00:36:07.232 [2024-11-17 11:30:31.598195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.598380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.598593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.598733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 00:36:07.232 [2024-11-17 11:30:31.598933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.232 [2024-11-17 11:30:31.598983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.232 qpair failed and we were unable to recover it. 
00:36:07.232 [2024-11-17 11:30:31.599125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.599919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.599946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.600843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.600884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.601921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.601949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.602183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.602220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.602373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.602406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.602584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.602612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.602749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.602803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.603004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.603114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.603306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.603348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.603546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.603592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.603685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.603713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.232 qpair failed and we were unable to recover it.
00:36:07.232 [2024-11-17 11:30:31.603846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.232 [2024-11-17 11:30:31.603876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.603992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.604926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.604954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.605952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.605980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.606081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.606115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.606250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.606291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.606459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.606494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.606685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.606725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.606827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.606856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.607840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.607868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.608036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.608085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.608259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.608300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.608582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.608611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.608735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.608763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.608905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.608933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.609920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.609949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.610061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.233 [2024-11-17 11:30:31.610090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.233 qpair failed and we were unable to recover it.
00:36:07.233 [2024-11-17 11:30:31.610206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.610261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.610343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.610371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.610493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.610550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.610730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.610775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.610913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.610966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.611105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.611150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.611303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.611346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.611504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.611544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.611685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.611713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.611821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.611875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.612965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.612997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.613854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.613908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.614163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.614402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.614560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.614696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.614889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.614973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.615001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.615117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.615165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.615281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.234 [2024-11-17 11:30:31.615308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.234 qpair failed and we were unable to recover it.
00:36:07.234 [2024-11-17 11:30:31.615403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.234 [2024-11-17 11:30:31.615431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.234 qpair failed and we were unable to recover it. 00:36:07.234 [2024-11-17 11:30:31.615514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.234 [2024-11-17 11:30:31.615549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.234 qpair failed and we were unable to recover it. 00:36:07.234 [2024-11-17 11:30:31.615667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.234 [2024-11-17 11:30:31.615695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.234 qpair failed and we were unable to recover it. 00:36:07.234 [2024-11-17 11:30:31.615813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.234 [2024-11-17 11:30:31.615843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.615979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.616175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.616293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.616441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.616592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.616772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.616944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.616976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.617141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.617175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.617301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.617330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.617440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.617467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.617592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.617626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.617791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.617831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.618627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.618949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.618976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.619098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.619241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.619400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.619583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.619763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.619911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.619940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.620053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.620182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.620358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.620500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.620644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.620833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.620881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.621031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.621269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.621419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.621565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.235 [2024-11-17 11:30:31.621706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 
00:36:07.235 [2024-11-17 11:30:31.621824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.235 [2024-11-17 11:30:31.621853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.235 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.621970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.621998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.622117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.622254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.622403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.622557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.622713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.622862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.622908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.623308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.623810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.623957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.623984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.624102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.624449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.624606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.624721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.624840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.624885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.625058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.625108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.625222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.625263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.625398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.625453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.625649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.625678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.625791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.625824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.625988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.626182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.626384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.626540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.626742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.626927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.626979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.627159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.627209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 00:36:07.236 [2024-11-17 11:30:31.627293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.627321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.236 qpair failed and we were unable to recover it. 
00:36:07.236 [2024-11-17 11:30:31.627449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.236 [2024-11-17 11:30:31.627478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.627587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.627616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.627712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.627741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.627860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.627902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.628046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 
00:36:07.237 [2024-11-17 11:30:31.628260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.628463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.628630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.628743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.628879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.628912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 
00:36:07.237 [2024-11-17 11:30:31.629047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.629100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.629289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.629331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.629511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.629547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.629654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.629702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.629803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.629853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 
00:36:07.237 [2024-11-17 11:30:31.630034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.630086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.630245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.630300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.630447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.630495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.630640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.630670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 00:36:07.237 [2024-11-17 11:30:31.630794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.237 [2024-11-17 11:30:31.630851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.237 qpair failed and we were unable to recover it. 
00:36:07.237 [2024-11-17 11:30:31.633814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.237 [2024-11-17 11:30:31.633843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.237 qpair failed and we were unable to recover it.
00:36:07.237 [2024-11-17 11:30:31.633964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.237 [2024-11-17 11:30:31.633993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.237 qpair failed and we were unable to recover it.
00:36:07.238 [2024-11-17 11:30:31.634112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.238 [2024-11-17 11:30:31.634141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.238 qpair failed and we were unable to recover it.
00:36:07.238 [2024-11-17 11:30:31.634229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.238 [2024-11-17 11:30:31.634257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.238 qpair failed and we were unable to recover it.
00:36:07.238 [2024-11-17 11:30:31.634446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.238 [2024-11-17 11:30:31.634487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.238 qpair failed and we were unable to recover it.
00:36:07.240 [2024-11-17 11:30:31.648845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.648872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 
00:36:07.240 [2024-11-17 11:30:31.649645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.649940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.649985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.650111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.650145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.240 qpair failed and we were unable to recover it. 00:36:07.240 [2024-11-17 11:30:31.650334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.240 [2024-11-17 11:30:31.650372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.650560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.650589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.650701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.650729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.650838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.650866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.650961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.650989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.651128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.651294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.651443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.651670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.651836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.651958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.651986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.652135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.652162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.652289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.652331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.652520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.652559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.652713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.652741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.652863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.652892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.653011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.653168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.653386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.653629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.653783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.653964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.653992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.654140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.654181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.654345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.654387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.654542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.654571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.654672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.654700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.654786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.654813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.654999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.655185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.655375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.655496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.241 [2024-11-17 11:30:31.655629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.655767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.655815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.655973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.656018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.656197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.656238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 00:36:07.241 [2024-11-17 11:30:31.656402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.241 [2024-11-17 11:30:31.656444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.241 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.656604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.656633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.656749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.656777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.656895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.656923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.657070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.657202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.657356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.657495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.657660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.657835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.657863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.658004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.658032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.658229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.658294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.658505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.658592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.658706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.658733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.658883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.658911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.659014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.659232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.659433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.659625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.659777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.659924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.659952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.660049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.660077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.660168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.660198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.660423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.660488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.660660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.660701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.660810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.660864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.660995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.661051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.661193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.661235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.661400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.661443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.661589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.661618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 00:36:07.242 [2024-11-17 11:30:31.661763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.242 [2024-11-17 11:30:31.661791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.242 qpair failed and we were unable to recover it. 
00:36:07.242 [2024-11-17 11:30:31.661910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.242 [2024-11-17 11:30:31.661938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.242 qpair failed and we were unable to recover it.
[repetitive log output elided: the connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it" message pair above recurs continuously from 11:30:31.661 through 11:30:31.679 for tqpair handles 0x7f39b8000b90, 0x7f39bc000b90, 0x7f39c4000b90, and 0x1edeb40, all attempting addr=10.0.0.2, port=4420]
00:36:07.246 [2024-11-17 11:30:31.680128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.680284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.680440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.680595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.680701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-11-17 11:30:31.680846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.680901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.681046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.681260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.681430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.681607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-11-17 11:30:31.681781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.681934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.681969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.682132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.682326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.682499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-11-17 11:30:31.682627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.682747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.682894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.682936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.683085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.683250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-11-17 11:30:31.683432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.683585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.683699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.683883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.683936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.684020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.684047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 
00:36:07.246 [2024-11-17 11:30:31.684166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.684217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.684335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.684362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.684446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.246 [2024-11-17 11:30:31.684473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.246 qpair failed and we were unable to recover it. 00:36:07.246 [2024-11-17 11:30:31.684567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.684596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.684684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.684712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.684801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.684828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.684946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.684974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.685456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.685890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.685981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.686122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.686267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.686381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.686498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.686649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.686783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.686958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.686992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.687520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.687879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.687907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.688050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.688164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.688309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.688432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.688554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.688720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 
00:36:07.247 [2024-11-17 11:30:31.688905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.688933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.689056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.689090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.689214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.689243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.689356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.689386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.247 qpair failed and we were unable to recover it. 00:36:07.247 [2024-11-17 11:30:31.689475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.247 [2024-11-17 11:30:31.689504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 
00:36:07.248 [2024-11-17 11:30:31.689611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.689641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.689738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.689765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.689890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.689918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.690048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.690223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 
00:36:07.248 [2024-11-17 11:30:31.690397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.690547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.690665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.690821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.690849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 00:36:07.248 [2024-11-17 11:30:31.691006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.248 [2024-11-17 11:30:31.691047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.248 qpair failed and we were unable to recover it. 
00:36:07.248 [2024-11-17 11:30:31.691207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.691248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.691391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.691418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.691512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.691550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.691709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.691737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.691853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.691881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.692884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.692933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.693877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.693922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.694125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.694165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.694307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.694339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.694450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.694480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.694610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.248 [2024-11-17 11:30:31.694645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.248 qpair failed and we were unable to recover it.
00:36:07.248 [2024-11-17 11:30:31.694740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.694768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.694939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.694983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.695846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.695893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.696819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.696860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.697879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.697938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.698900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.698928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.699110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.699306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.699477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.699693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.699815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.699963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.700898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.249 [2024-11-17 11:30:31.700983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.249 [2024-11-17 11:30:31.701011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.249 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.701853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.701881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.702864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.702917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.703838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.703872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.704871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.704912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.705895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.705936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.706892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.706932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.250 qpair failed and we were unable to recover it.
00:36:07.250 [2024-11-17 11:30:31.707062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.250 [2024-11-17 11:30:31.707118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.707964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.707992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.708859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.708974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.251 [2024-11-17 11:30:31.709653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.251 qpair failed and we were unable to recover it.
00:36:07.251 [2024-11-17 11:30:31.709769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.709797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.709885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.709912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.709995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.710152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.710320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 
00:36:07.251 [2024-11-17 11:30:31.710461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.710592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.710748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.710901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.710930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 
00:36:07.251 [2024-11-17 11:30:31.711178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 
00:36:07.251 [2024-11-17 11:30:31.711795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.711943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.711975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.712097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.712126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.712243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.251 [2024-11-17 11:30:31.712269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.251 qpair failed and we were unable to recover it. 00:36:07.251 [2024-11-17 11:30:31.712361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.712388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.712503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.712538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.712639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.712666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.712790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.712818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.712933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.712961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.713084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.713207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.713343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.713488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.713613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.713760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.713944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.713972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.714118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.714260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.714397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.714596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.714746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.714884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.714923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.715079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.715125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.715276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.715316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.715444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.715473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.715638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.715688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.715798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.715849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.715988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.716198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.716373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.716548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.716721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.716917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.716957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.717098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.717284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.717457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.717613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.717759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.717874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.717902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.718050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.718077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 
00:36:07.252 [2024-11-17 11:30:31.718194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.718222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.718337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.252 [2024-11-17 11:30:31.718370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.252 qpair failed and we were unable to recover it. 00:36:07.252 [2024-11-17 11:30:31.718466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.718495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.718656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.718685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.718781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.718810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 
00:36:07.253 [2024-11-17 11:30:31.718962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.718990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.719077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.719188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.719365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.719501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 
00:36:07.253 [2024-11-17 11:30:31.719682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.719869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.719929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 
00:36:07.253 [2024-11-17 11:30:31.720511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.720936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.720963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 00:36:07.253 [2024-11-17 11:30:31.721062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.721090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 
00:36:07.253 [2024-11-17 11:30:31.721211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.253 [2024-11-17 11:30:31.721239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.253 qpair failed and we were unable to recover it. 
[log condensed: the same three-line error sequence — posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 11:30:31.721 through 11:30:31.740, cycling through tqpair handles 0x7f39c4000b90, 0x7f39bc000b90, and 0x7f39b8000b90, always with addr=10.0.0.2, port=4420]
00:36:07.256 [2024-11-17 11:30:31.740300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.740328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.740474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.740501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.740641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.740683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.740815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.740845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.740968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.740996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.256 [2024-11-17 11:30:31.741121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.741148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.741271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.741299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.741415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.741572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.741615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.741783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.741824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 
00:36:07.256 [2024-11-17 11:30:31.741992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.742033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.742155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.742197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.742371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.742412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.256 qpair failed and we were unable to recover it. 00:36:07.256 [2024-11-17 11:30:31.742586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.256 [2024-11-17 11:30:31.742631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.742758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.742824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.742984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.743265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.743458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.743588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.743713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.743853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.743894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.744062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.744238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.744414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.744628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.744775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.744894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.744923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.745519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.745836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.745950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.746149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.746315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.746514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.746650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.746822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.746877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.746996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.747234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.747396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.747509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.747696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.747875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.747903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 
00:36:07.257 [2024-11-17 11:30:31.748076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.748117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.257 qpair failed and we were unable to recover it. 00:36:07.257 [2024-11-17 11:30:31.748239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.257 [2024-11-17 11:30:31.748280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.748403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.748444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.748576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.748605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.748687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.748738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.748875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.748917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.749035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.749266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.749442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.749629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.749768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.749934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.749988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.750605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.750902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.750987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.751149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.751340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.751502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.751660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.751779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.751807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.751967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.752131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.752302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.752481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.752615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 00:36:07.258 [2024-11-17 11:30:31.752743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.258 [2024-11-17 11:30:31.752799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.258 qpair failed and we were unable to recover it. 
00:36:07.258 [2024-11-17 11:30:31.752946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.258 [2024-11-17 11:30:31.752994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.258 qpair failed and we were unable to recover it.
[... the same three-record pattern repeats continuously from 11:30:31.752946 through 11:30:31.771284 (roughly 115 occurrences, log timestamps 00:36:07.258-00:36:07.262); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, with tqpair values cycling among 0x7f39c4000b90, 0x7f39bc000b90, 0x7f39b8000b90, and once 0x1edeb40 ...]
00:36:07.262 [2024-11-17 11:30:31.771464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.771493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.771585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.771614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.771704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.771732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.771883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.771925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.772053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 
00:36:07.262 [2024-11-17 11:30:31.772229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.772432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.772582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.772727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.772840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.772868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 
00:36:07.262 [2024-11-17 11:30:31.772990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.773171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.773331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.773584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.773707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 
00:36:07.262 [2024-11-17 11:30:31.773824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.773935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.773963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.774112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.774291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.774457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 
00:36:07.262 [2024-11-17 11:30:31.774637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.774761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.774880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.774908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.775049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.775103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.262 qpair failed and we were unable to recover it. 00:36:07.262 [2024-11-17 11:30:31.775225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.262 [2024-11-17 11:30:31.775276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.775395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.775422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.775503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.775541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.775651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.775705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.775789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.775816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.775893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.775920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.776049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.776161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.776279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.776427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.776574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.776696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.776850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.776891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.777071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.777113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.777272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.777314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.777465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.777494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.777635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.777663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.777777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.777835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.777982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.778112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.778264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.778408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.778520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.778675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.778858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.778901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.779039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.779221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.779392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.779543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.779692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 00:36:07.263 [2024-11-17 11:30:31.779882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.779930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.263 qpair failed and we were unable to recover it. 
00:36:07.263 [2024-11-17 11:30:31.780043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.263 [2024-11-17 11:30:31.780096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.780188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.780330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.780481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.780635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 
00:36:07.264 [2024-11-17 11:30:31.780786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.780958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.780986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 
00:36:07.264 [2024-11-17 11:30:31.781494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.781939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.781967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.782090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 
00:36:07.264 [2024-11-17 11:30:31.782210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.782362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.782501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.782716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 00:36:07.264 [2024-11-17 11:30:31.782845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.264 [2024-11-17 11:30:31.782902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.264 qpair failed and we were unable to recover it. 
00:36:07.264 [2024-11-17 11:30:31.783045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.783210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.783432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.783616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.783765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.783943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.783971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.784096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.784138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.784262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.784305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.784479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.784533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.784687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.784716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.784820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.784876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.264 [2024-11-17 11:30:31.785798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.264 qpair failed and we were unable to recover it.
00:36:07.264 [2024-11-17 11:30:31.785996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.786040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.786189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.786235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.786419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.786466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.786625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.786654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.786771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.786829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.786996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.787039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.787278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.787322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.787492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.787565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.787693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.787721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.787829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.787869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.788874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.788923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.789935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.789963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.790959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.790986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.791906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.791984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.792011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.792107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.792134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.792222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.792249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.792350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.265 [2024-11-17 11:30:31.792392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.265 qpair failed and we were unable to recover it.
00:36:07.265 [2024-11-17 11:30:31.792485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.792514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.792644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.792673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.792760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.792788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.792911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.792939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.793062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.793289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.793458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.793654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.793818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.793972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.794142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.794341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.794559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.794739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.794934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.794990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.795929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.795970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.796129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.796171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.796334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.796375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.796543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.796573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.796732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.796793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.796976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.797199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.797363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.797571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.797721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.797929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.797970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.798167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.798208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.798374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.798415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.798567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.798598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.798686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.798714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.798856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.798907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.799110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.799275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.799451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.799655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.799849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.799990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.800039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.266 [2024-11-17 11:30:31.800156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.266 [2024-11-17 11:30:31.800185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.266 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.800307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.800335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.800447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.800489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.800632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.800674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.800810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.800851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.800979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.801160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.801310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.801486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.801666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.801837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.801885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.802026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.802075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.802200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.802228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.802349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.802376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.802473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.267 [2024-11-17 11:30:31.802500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.267 qpair failed and we were unable to recover it.
00:36:07.267 [2024-11-17 11:30:31.802656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.802697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.802816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.802858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.803010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.803188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.803331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 
00:36:07.267 [2024-11-17 11:30:31.803480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.803611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.803801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.803849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.804045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.804269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 
00:36:07.267 [2024-11-17 11:30:31.804510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.804641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.804753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.804918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.804966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.805093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.805134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 
00:36:07.267 [2024-11-17 11:30:31.805290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.805331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.805486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.805513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.805629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.805657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.805773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.805800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.805972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 
00:36:07.267 [2024-11-17 11:30:31.806199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.806361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.806558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.806682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 00:36:07.267 [2024-11-17 11:30:31.806796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.267 [2024-11-17 11:30:31.806823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.267 qpair failed and we were unable to recover it. 
00:36:07.268 [2024-11-17 11:30:31.806934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.806975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.807134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.807174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.807369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.807431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.807632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.807662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.807788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.807845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 
00:36:07.268 [2024-11-17 11:30:31.808004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.808180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.808390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.808587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.808727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 
00:36:07.268 [2024-11-17 11:30:31.808872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.808900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.809074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.809117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.809326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.809367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.809540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.809589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.809709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.809738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 
00:36:07.268 [2024-11-17 11:30:31.809872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.809900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.810016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.810043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.810226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.810270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.268 [2024-11-17 11:30:31.810481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.268 [2024-11-17 11:30:31.810533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.268 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.810645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.810672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.810768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.810796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.810955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.810995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.811191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.811239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.811397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.811441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.811575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.811604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.811692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.811719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.811841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.811868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.811989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.812029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.812162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.812203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.812387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.812452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.812638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.812679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.812812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.812841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.812955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.813194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.813368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.813554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.813729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.813920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.813961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.814172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.814213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.814370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.814410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.814540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.814587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.814679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.814727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.814879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.814919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.815038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.815079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.815292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.815333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.815469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.815509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.815659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.815686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.815796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.815836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.815994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.816034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.816227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.816275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.816403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.816430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.816580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.816622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.816757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.816798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.816969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.817013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.817183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.817227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 
00:36:07.269 [2024-11-17 11:30:31.817401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.817442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.269 qpair failed and we were unable to recover it. 00:36:07.269 [2024-11-17 11:30:31.817636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.269 [2024-11-17 11:30:31.817677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.270 qpair failed and we were unable to recover it. 00:36:07.270 [2024-11-17 11:30:31.817803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.270 [2024-11-17 11:30:31.817853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.270 qpair failed and we were unable to recover it. 00:36:07.270 [2024-11-17 11:30:31.818006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.270 [2024-11-17 11:30:31.818048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.270 qpair failed and we were unable to recover it. 00:36:07.270 [2024-11-17 11:30:31.818179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.270 [2024-11-17 11:30:31.818218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.270 qpair failed and we were unable to recover it. 
00:36:07.270 [2024-11-17 11:30:31.818404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.818449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.818667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.818695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.818832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.818872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.819937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.819979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.820963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.820990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.821220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.821398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.821594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.821748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.821891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.821982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.822175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.822348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.822540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.822689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.822870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.822898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.823893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.823921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.824042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.824069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.824197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.270 [2024-11-17 11:30:31.824263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.270 qpair failed and we were unable to recover it.
00:36:07.270 [2024-11-17 11:30:31.824466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.824508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.824625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.824667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.824789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.824819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.824960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.824989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.825162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.825206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.825412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.825478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.825647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.825674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.825788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.825815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.825940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.825980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.826147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.826194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.826355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.826429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.826611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.826641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.826738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.826767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.826890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.826918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.827080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.827240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.827479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.827645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.827830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.827962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.828159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.828336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.828508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.828658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.828829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.828858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.829066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.829271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.829464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.829623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.829803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.829984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.830027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.830213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.830257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.830439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.830466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.830614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.830643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.830756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.830785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.830973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.831002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.831163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.831294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.271 [2024-11-17 11:30:31.831377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.271 qpair failed and we were unable to recover it.
00:36:07.271 [2024-11-17 11:30:31.831561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.831589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.831677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.831705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.831789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.831816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.831965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.832165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.832374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.832577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.832734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.832846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.832874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.833963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.833993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.834139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.834167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.834318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.834362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.834560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.834589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.834714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.834742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.834868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.834895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.835014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.835060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.835233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.835275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.835440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.272 [2024-11-17 11:30:31.835468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.272 qpair failed and we were unable to recover it.
00:36:07.272 [2024-11-17 11:30:31.835552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.835580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.835682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.835709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.835805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.835837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.835933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.835960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.836079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.836109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 
00:36:07.272 [2024-11-17 11:30:31.836267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.836311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.836481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.836534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.836677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.836705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.836823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.272 [2024-11-17 11:30:31.836851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.272 qpair failed and we were unable to recover it. 00:36:07.272 [2024-11-17 11:30:31.836996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.837190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.837424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.837674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.837797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.837941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.837968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.838092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.838151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.838350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.838420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.838639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.838669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.838788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.838817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.838906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.838934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.839107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.839171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.839341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.839399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.839541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.839570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.839689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.839717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.839853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.839904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.840050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.840220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.840335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.840513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.840646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.840796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.840936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.840964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.841120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.841338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.841501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.841691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.841836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.841952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.841980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.842139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.842191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.842344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.842393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.842482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.842511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.842622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.842650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.842801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.842856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.843012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.843060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.843206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.843255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 
00:36:07.273 [2024-11-17 11:30:31.843376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.843405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.843533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.273 [2024-11-17 11:30:31.843561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.273 qpair failed and we were unable to recover it. 00:36:07.273 [2024-11-17 11:30:31.843712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.843758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.843915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.843971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.844125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 
00:36:07.274 [2024-11-17 11:30:31.844324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.844490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.844634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.844780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.844943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.844970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 
00:36:07.274 [2024-11-17 11:30:31.845066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.845219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.845340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.845476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.845670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 
00:36:07.274 [2024-11-17 11:30:31.845837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.845867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 
00:36:07.274 [2024-11-17 11:30:31.846702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.846885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.846995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.847071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.847354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.847419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.847609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.847644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 
00:36:07.274 [2024-11-17 11:30:31.847764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.847792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.847902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.847966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.848165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.848231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.848551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.848606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.274 qpair failed and we were unable to recover it. 00:36:07.274 [2024-11-17 11:30:31.848693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.274 [2024-11-17 11:30:31.848723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 
00:36:07.575 [2024-11-17 11:30:31.848848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.848877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.848998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.849041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.849205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.849270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.849420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.849464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.849634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.849663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 
00:36:07.575 [2024-11-17 11:30:31.849765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.849794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.849976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.850246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.850473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.850671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 
00:36:07.575 [2024-11-17 11:30:31.850789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.850912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.850941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.851031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.851218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.851445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 
00:36:07.575 [2024-11-17 11:30:31.851652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.851774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.851918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.851946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.852078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.852254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 
00:36:07.575 [2024-11-17 11:30:31.852474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.852648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.852780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.575 qpair failed and we were unable to recover it. 00:36:07.575 [2024-11-17 11:30:31.852901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.575 [2024-11-17 11:30:31.852932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.853033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.853223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.853445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.853631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.853754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.853866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.853894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.853994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.854156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.854424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.854652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.854797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.854927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.854958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.855100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.855128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.855333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.855406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.855561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.855610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.855693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.855721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.855843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.855871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.856640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.856901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.856929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.857088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.857116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.857206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.857233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.857353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.857381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.857506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.857541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.857748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.857792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.857969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.858012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.858180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.858225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.858424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.858467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.858643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.858687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.858812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.858856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.858994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.859039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.859243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.859285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 
00:36:07.576 [2024-11-17 11:30:31.859422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.576 [2024-11-17 11:30:31.859466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.576 qpair failed and we were unable to recover it. 00:36:07.576 [2024-11-17 11:30:31.859645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.859697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.859874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.859917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.860092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.860135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.860324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.860389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.860561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.860606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.860754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.860799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.860943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.860988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.861193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.861236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.861404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.861448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.861628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.861672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.861841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.861884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.862089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.862132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.862392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.862456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.862666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.862709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.862918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.862984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.863267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.863331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.863567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.863611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.863754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.863797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.863947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.863991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.864162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.864206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.864355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.864420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.864591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.864636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.864840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.864883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.865060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.865104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.865275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.865319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.865458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.865503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.865731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.865810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.866026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.866092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.866331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.866396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.866612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.866679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.866842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.866913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.867100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.867165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.867397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.867462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.867686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.867753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 
00:36:07.577 [2024-11-17 11:30:31.868034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.868099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.868375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.868661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.868731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.869016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.577 [2024-11-17 11:30:31.869082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.577 qpair failed and we were unable to recover it. 00:36:07.577 [2024-11-17 11:30:31.869249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.869292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 
00:36:07.578 [2024-11-17 11:30:31.869492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.869555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.869735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.869794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.870000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.870064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.870294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.870359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.870584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.870628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 
00:36:07.578 [2024-11-17 11:30:31.870805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.870848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.871017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.871060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.871267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.871309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.871463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.871506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 00:36:07.578 [2024-11-17 11:30:31.871700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.578 [2024-11-17 11:30:31.871746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.578 qpair failed and we were unable to recover it. 
00:36:07.581 [2024-11-17 11:30:31.901495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.901559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.901762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.901822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.901990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.902042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.902282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.902334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.902546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.902599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 
00:36:07.581 [2024-11-17 11:30:31.902837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.902890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.903097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.903149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.903366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.903432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.903685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.903732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.903966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.904012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 
00:36:07.581 [2024-11-17 11:30:31.904191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.904253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.904467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.904519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.904780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.904833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 403148 Killed "${NVMF_APP[@]}" "$@" 00:36:07.581 qpair failed and we were unable to recover it. 00:36:07.581 [2024-11-17 11:30:31.905028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.581 [2024-11-17 11:30:31.905096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.581 qpair failed and we were unable to recover it. 
00:36:07.581 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:36:07.581 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:36:07.581 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:07.581 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:07.581 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403696
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403696
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403696 ']'
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:07.582 11:30:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.583 [2024-11-17 11:30:31.917789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.583 [2024-11-17 11:30:31.917891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.583 qpair failed and we were unable to recover it.
00:36:07.583 [2024-11-17 11:30:31.926587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.583 [2024-11-17 11:30:31.926654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.583 qpair failed and we were unable to recover it. 00:36:07.583 [2024-11-17 11:30:31.926920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.583 [2024-11-17 11:30:31.926986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.583 qpair failed and we were unable to recover it. 00:36:07.583 [2024-11-17 11:30:31.927237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.583 [2024-11-17 11:30:31.927303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.583 qpair failed and we were unable to recover it. 00:36:07.583 [2024-11-17 11:30:31.927566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.583 [2024-11-17 11:30:31.927632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.583 qpair failed and we were unable to recover it. 00:36:07.583 [2024-11-17 11:30:31.927870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.583 [2024-11-17 11:30:31.927935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.583 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.928200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.928269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.928465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.928538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.928745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.928812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.929074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.929140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.929344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.929390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.929569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.929640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.929867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.929932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.930153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.930235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.930499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.930576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.930748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.930804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.931013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.931068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.931298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.931353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.931564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.931626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.931865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.931925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.932147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.932206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.932473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.932556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.932795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.932862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.933088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.933147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.933376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.933442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.933736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.933803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.934032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.934094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.934345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.934405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.934575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.934636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.934873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.934933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.935174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.935233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.935484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.935580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.935775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.935836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.936080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.936140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.936392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.936452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.936725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 
00:36:07.584 [2024-11-17 11:30:31.936912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.936976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.937200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.937259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.937547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.937609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.937782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.584 [2024-11-17 11:30:31.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.584 qpair failed and we were unable to recover it. 00:36:07.584 [2024-11-17 11:30:31.938022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.938091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.938330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.938390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.938581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.938642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.938837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.938900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.939122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.939182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.939343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.939426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.939657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.939719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.939942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.940002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.940228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.940288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.940472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.940547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.940758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.940819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.941089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.941149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.941374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.941438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.941710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.941771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.941978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.942039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.942228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.942288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.942555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.942639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.942859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.942927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.943248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.943315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.943571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.943633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.943831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.943891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.944120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.944181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.944417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.944477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.944677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.944739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.944927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.944990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.945229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.945290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.945518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.945594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.945796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.945856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.946050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.946112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.946357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.946419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.946609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.946674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.946908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.946969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.947198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.947259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.947447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.947508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.947792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.947852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.948087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.948147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 
00:36:07.585 [2024-11-17 11:30:31.948427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.948486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.585 [2024-11-17 11:30:31.948736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.585 [2024-11-17 11:30:31.948796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.585 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.948979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.949041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.949225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.949287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.949516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.949606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.949846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.949910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.950201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.950266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.950515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.950608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.950853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.950914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.951158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.951218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.951484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.951583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.951816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.951877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.952061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.952120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.952317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.952376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.952603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.952665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.952902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.952963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.953158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.953218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.953421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.953481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.953766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.953832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.954026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.954094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.954305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.954370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.954682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.954749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.955042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.955102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.955313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.955374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.955576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.955638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.955900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.955968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.956090] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:07.586 [2024-11-17 11:30:31.956173] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:07.586 [2024-11-17 11:30:31.956213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.956540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.956606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.956864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.956929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.957218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.957281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.957558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.957627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.957829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.957890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.958158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.958219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.958462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.958541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 
00:36:07.586 [2024-11-17 11:30:31.958751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.958818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.959119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.959184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.959393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.959454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.959734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.959797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.586 qpair failed and we were unable to recover it. 00:36:07.586 [2024-11-17 11:30:31.960047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.586 [2024-11-17 11:30:31.960112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.960336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.960396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.960656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.960738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.961066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.961131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.961392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.961452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.961769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.961847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.962139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.962204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.962460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.962521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.962704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.962764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.962969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.963037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.963269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.963338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.963594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.963657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.963907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.963973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.964180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.964245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.964467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.964542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.964796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.964865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.965155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.965220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.965444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.965505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.965780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.965846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.966130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.966211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.966444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.966501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.966751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.966828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.967054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.967118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.967298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.967364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.967631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.967666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.967778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.967812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.967919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.967991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.968221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.968292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.968507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.968588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.968731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.968765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.968907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.968958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.969242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.969319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.969582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.969617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.969723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.969757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.969897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.969931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.970163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.970197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 
00:36:07.587 [2024-11-17 11:30:31.970490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.970583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.970762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.970796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.587 qpair failed and we were unable to recover it. 00:36:07.587 [2024-11-17 11:30:31.970937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.587 [2024-11-17 11:30:31.971006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.971263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.971344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.971571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.971606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 
00:36:07.588 [2024-11-17 11:30:31.971750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.971784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.971889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.971924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.972106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.972158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.972458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.972539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.972702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.972743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 
00:36:07.588 [2024-11-17 11:30:31.972935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.973001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.973251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.973327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.973557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.973785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.973850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.974053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.974121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 
00:36:07.588 [2024-11-17 11:30:31.974424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.974458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.974611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.974646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.974760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.974801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.975075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.975140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 00:36:07.588 [2024-11-17 11:30:31.975372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.588 [2024-11-17 11:30:31.975433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.588 qpair failed and we were unable to recover it. 
00:36:07.588 [2024-11-17 11:30:31.975582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.588 [2024-11-17 11:30:31.975617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.588 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every retry from 11:30:31.975733 through 11:30:31.999345 ...]
00:36:07.591 [2024-11-17 11:30:31.999453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:31.999479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:31.999596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:31.999623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:31.999732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:31.999758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:31.999881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:31.999907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:31.999988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 
00:36:07.591 [2024-11-17 11:30:32.000093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 
00:36:07.591 [2024-11-17 11:30:32.000739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.000971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.000997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.001136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.591 [2024-11-17 11:30:32.001162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.591 qpair failed and we were unable to recover it. 00:36:07.591 [2024-11-17 11:30:32.001271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.001412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.001537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.001648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.001756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.001901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.001927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.002017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.002687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.002944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.002970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.003300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.003818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.003938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.003964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.004703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.004970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.004997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.005331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.005951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.005977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 
00:36:07.592 [2024-11-17 11:30:32.006060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.006085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.006165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.006191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.592 qpair failed and we were unable to recover it. 00:36:07.592 [2024-11-17 11:30:32.006303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.592 [2024-11-17 11:30:32.006329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.006439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.006465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.006569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.006597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 
00:36:07.593 [2024-11-17 11:30:32.006688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.006715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.006810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.006836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.006950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.006975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 
00:36:07.593 [2024-11-17 11:30:32.007341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.007904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.007929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 
00:36:07.593 [2024-11-17 11:30:32.008050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 
00:36:07.593 [2024-11-17 11:30:32.008641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.008891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.008982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.009007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 00:36:07.593 [2024-11-17 11:30:32.009090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.593 [2024-11-17 11:30:32.009121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.593 qpair failed and we were unable to recover it. 
00:36:07.593 [2024-11-17 11:30:32.009212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.593 [2024-11-17 11:30:32.009240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.593 qpair failed and we were unable to recover it.
[The identical three-record error pattern repeats continuously from 11:30:32.009212 through 11:30:32.023860 (elapsed-time stamps 00:36:07.593 to 00:36:07.596): posix.c:1054:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reports a sock connection error, and each attempt ends with "qpair failed and we were unable to recover it." The failing tqpair values alternate between 0x7f39b8000b90, 0x7f39c4000b90, and 0x1edeb40, all targeting addr=10.0.0.2, port=4420.]
00:36:07.596 [2024-11-17 11:30:32.023975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 
00:36:07.596 [2024-11-17 11:30:32.024696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.024968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.024994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.025077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.025104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.596 [2024-11-17 11:30:32.025186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.025212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 
00:36:07.596 [2024-11-17 11:30:32.025294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.596 [2024-11-17 11:30:32.025320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.596 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.025425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.025450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.025567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.025595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.025708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.025735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.025814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.025840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.025982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.026661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.026899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.026926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.027264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.027803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.027945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.027971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.028505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.028928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.028955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.029209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 
00:36:07.597 [2024-11-17 11:30:32.029826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.029931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.029957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.030071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.030099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.597 qpair failed and we were unable to recover it. 00:36:07.597 [2024-11-17 11:30:32.030211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.597 [2024-11-17 11:30:32.030237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.030316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.030342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.030451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.030477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.030609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.030648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.030745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.030773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.030895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.030921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.031166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.031794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.031893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.031919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.032432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.032831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.032980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.033089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.033663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.033913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.033939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.034024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.034141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.034167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 
00:36:07.598 [2024-11-17 11:30:32.034244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.034271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.034345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.034371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.034458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.598 [2024-11-17 11:30:32.034484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.598 qpair failed and we were unable to recover it. 00:36:07.598 [2024-11-17 11:30:32.034570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.599 [2024-11-17 11:30:32.034596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.599 qpair failed and we were unable to recover it. 00:36:07.599 [2024-11-17 11:30:32.034676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.599 [2024-11-17 11:30:32.034703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.599 qpair failed and we were unable to recover it. 
00:36:07.599 [2024-11-17 11:30:32.034798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.034824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.034932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.034958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:07.599 [2024-11-17 11:30:32.035477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.035877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.035903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.036933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.036959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.037948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.037974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.038862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.038887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.039003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.039028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.039108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.039134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.039217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.039243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.599 [2024-11-17 11:30:32.039328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.599 [2024-11-17 11:30:32.039354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.599 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.039440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.039466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.039560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.039587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.039698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.039724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.039808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.039835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.039960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.039987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.040892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.040917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.041877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.041990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.042945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.042971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.043864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.043891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.044005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.044031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.600 [2024-11-17 11:30:32.044142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.600 [2024-11-17 11:30:32.044168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.600 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.044946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.044974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.045918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.045944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.046860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.601 [2024-11-17 11:30:32.046886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.601 qpair failed and we were unable to recover it.
00:36:07.601 [2024-11-17 11:30:32.047004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 
00:36:07.601 [2024-11-17 11:30:32.047647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.047922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.047948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 
00:36:07.601 [2024-11-17 11:30:32.048321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 00:36:07.601 [2024-11-17 11:30:32.048805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.601 qpair failed and we were unable to recover it. 
00:36:07.601 [2024-11-17 11:30:32.048938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.601 [2024-11-17 11:30:32.048964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.049512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.049866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.049892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.050117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.050795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.050916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.050943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.051455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.051957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.051985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.052108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.052248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.052361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.052466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.052618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.602 [2024-11-17 11:30:32.052751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.052897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.052923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.053001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.053027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.053108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.053134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 00:36:07.602 [2024-11-17 11:30:32.053248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.602 [2024-11-17 11:30:32.053274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.602 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.053356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.053381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.053509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.053555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.053638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.053664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.053784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.053810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.053899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.053926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.054042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.054630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.054863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.054889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.055286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.055799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.055916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.055942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.056666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.056921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.056947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.057359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 00:36:07.603 [2024-11-17 11:30:32.057888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.603 [2024-11-17 11:30:32.057914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.603 qpair failed and we were unable to recover it. 
00:36:07.603 [2024-11-17 11:30:32.057997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.603 [2024-11-17 11:30:32.058023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.603 qpair failed and we were unable to recover it.
00:36:07.603 [2024-11-17 11:30:32.058111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.603 [2024-11-17 11:30:32.058137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.603 qpair failed and we were unable to recover it.
00:36:07.603 [2024-11-17 11:30:32.058210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.603 [2024-11-17 11:30:32.058235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.603 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.058321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.058348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.058463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.058489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.058627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.058667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.058756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.058784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.058900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.058927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.059889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.059994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.060882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.060908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.061900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.061926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.062964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.062990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.063080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.063106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.063222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.063249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.604 [2024-11-17 11:30:32.063361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.604 [2024-11-17 11:30:32.063389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.604 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.063469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.063495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.063589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.063616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.063704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.063730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.063808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.063834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.063944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.063971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.064863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.064889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.065931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.065960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.066966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.066992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.067843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.067973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.068001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.068088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.068114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.068200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.068226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.605 [2024-11-17 11:30:32.068313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.605 [2024-11-17 11:30:32.068339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.605 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.068455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.068481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.068568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.068596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.068711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.068738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.068880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.068906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.068979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.069901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.069927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.070902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.070928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.071855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.071882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.072909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.072935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.073077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.073103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.073191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.073219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.073355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.606 [2024-11-17 11:30:32.073395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.606 qpair failed and we were unable to recover it.
00:36:07.606 [2024-11-17 11:30:32.073518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.606 [2024-11-17 11:30:32.073553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.606 qpair failed and we were unable to recover it. 00:36:07.606 [2024-11-17 11:30:32.073664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.606 [2024-11-17 11:30:32.073691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.606 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.073805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.073831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.073957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.073983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.074072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.074219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.074345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.074492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.074641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.074783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.074890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.074917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.075563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.075864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.075981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.076124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.076234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.076374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.076530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.076693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.076818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.076931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.076958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.077540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.077906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.077933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.607 [2024-11-17 11:30:32.078011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.078037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 
00:36:07.607 [2024-11-17 11:30:32.078120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.607 [2024-11-17 11:30:32.078147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.607 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.078261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.078375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.078539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.078686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.078939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.078965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.079508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.079940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.079968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.080199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.080766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.080907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.080933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.081435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.081970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.081997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.082073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 
00:36:07.608 [2024-11-17 11:30:32.082665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.082906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.608 qpair failed and we were unable to recover it. 00:36:07.608 [2024-11-17 11:30:32.082979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.608 [2024-11-17 11:30:32.083005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.083206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.083743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:07.609 [2024-11-17 11:30:32.083778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:07.609 [2024-11-17 11:30:32.083781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.083798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:07.609 [2024-11-17 11:30:32.083811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:07.609 [2024-11-17 11:30:32.083821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:07.609 [2024-11-17 11:30:32.083914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.083939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.084285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.084787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.084896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.084922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.085006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.085033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.085128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.085156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.085237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.085264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.085400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.085454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.085403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:36:07.609 [2024-11-17 11:30:32.085459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:36:07.609 [2024-11-17 11:30:32.085506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:36:07.609 [2024-11-17 11:30:32.085509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:36:07.609 [2024-11-17 11:30:32.085555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.609 [2024-11-17 11:30:32.085581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.609 qpair failed and we were unable to recover it.
00:36:07.609 [2024-11-17 11:30:32.085678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.609 [2024-11-17 11:30:32.085703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.609 qpair failed and we were unable to recover it.
00:36:07.609 [2024-11-17 11:30:32.085786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.609 [2024-11-17 11:30:32.085811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.609 qpair failed and we were unable to recover it.
00:36:07.609 [2024-11-17 11:30:32.085906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.609 [2024-11-17 11:30:32.085932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.609 qpair failed and we were unable to recover it.
00:36:07.609 [2024-11-17 11:30:32.086048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.086604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.086942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.086968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 00:36:07.609 [2024-11-17 11:30:32.087044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.609 [2024-11-17 11:30:32.087070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.609 qpair failed and we were unable to recover it. 
00:36:07.609 [2024-11-17 11:30:32.087165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.087756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.087893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.087977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.088290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.088864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.088970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.088997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.089407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.089880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.089906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.089994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.090668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.090906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.090934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.091042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.091068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.091156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.091183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 
00:36:07.610 [2024-11-17 11:30:32.091272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.091311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.091433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.091466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.610 qpair failed and we were unable to recover it. 00:36:07.610 [2024-11-17 11:30:32.091570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.610 [2024-11-17 11:30:32.091597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.091682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.091708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.091789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.091815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.091901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.091927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.092481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.092895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.092974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.093115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.093716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.093954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.093981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.094332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.094786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.094905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.094932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 
00:36:07.611 [2024-11-17 11:30:32.095560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.611 [2024-11-17 11:30:32.095698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.611 qpair failed and we were unable to recover it. 00:36:07.611 [2024-11-17 11:30:32.095785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.095813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.095906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.095933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.096123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.096750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.096890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.096972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.097304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.097799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.097902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.097931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.098502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.098880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.098991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.099100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.099723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.099938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.099964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.100049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.100077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 00:36:07.612 [2024-11-17 11:30:32.100156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.612 [2024-11-17 11:30:32.100183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.612 qpair failed and we were unable to recover it. 
00:36:07.612 [2024-11-17 11:30:32.100265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.100376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.100487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.100609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.100718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.100839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.100951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.100977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.101404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.101892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.101919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.102003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.102597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.102931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.102957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.103187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.103816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.103918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.103944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.104046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.104166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.104335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 
00:36:07.613 [2024-11-17 11:30:32.104447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.104559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.613 [2024-11-17 11:30:32.104678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.613 [2024-11-17 11:30:32.104705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.613 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.104790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.104817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.104895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.104921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-11-17 11:30:32.104997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.105024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.105102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.105128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.105212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.105241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.105324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.105352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 00:36:07.614 [2024-11-17 11:30:32.105435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.614 [2024-11-17 11:30:32.105461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.614 qpair failed and we were unable to recover it. 
00:36:07.614 [2024-11-17 11:30:32.105550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.105579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.105671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.105698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.105780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.105809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.105893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.105920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.105999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.106928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.106954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.107906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.107993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.108895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.108976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.614 [2024-11-17 11:30:32.109003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.614 qpair failed and we were unable to recover it.
00:36:07.614 [2024-11-17 11:30:32.109110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.109903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.109929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.110934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.110960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.111968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.111996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.112913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.112940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.113046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.113184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.113306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.113417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.615 [2024-11-17 11:30:32.113528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.615 qpair failed and we were unable to recover it.
00:36:07.615 [2024-11-17 11:30:32.113620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.113646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.113739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.113766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.113843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.113869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.113955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.113984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.114896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.115962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.115989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.116924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.116951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.117034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.117061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.117174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.117201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.117284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.117310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.117389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.117415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.616 qpair failed and we were unable to recover it.
00:36:07.616 [2024-11-17 11:30:32.117505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.616 [2024-11-17 11:30:32.117541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.117628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.117656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.117741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.117767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.117860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.117886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.117968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.117996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.118928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.118954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.119033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.617 [2024-11-17 11:30:32.119059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.617 qpair failed and we were unable to recover it.
00:36:07.617 [2024-11-17 11:30:32.119139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.119256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.119423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.119548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.119699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-11-17 11:30:32.119844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.119944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.119970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-11-17 11:30:32.120413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.120912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.120939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-11-17 11:30:32.121051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 
00:36:07.617 [2024-11-17 11:30:32.121610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.121917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.121992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.617 [2024-11-17 11:30:32.122018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.617 qpair failed and we were unable to recover it. 00:36:07.617 [2024-11-17 11:30:32.122098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.122216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.122335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.122461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.122574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.122691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.122793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.122898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.122925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.123373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.123834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.123943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.123972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.124552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.124908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.124993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.125131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.125251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.125361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.125499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.125621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.125728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.125839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.125865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 
00:36:07.618 [2024-11-17 11:30:32.126484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.618 [2024-11-17 11:30:32.126902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.618 qpair failed and we were unable to recover it. 00:36:07.618 [2024-11-17 11:30:32.126980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.619 [2024-11-17 11:30:32.127120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.127224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.127350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.127473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.127607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.619 [2024-11-17 11:30:32.127731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.127875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.127901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.619 [2024-11-17 11:30:32.128365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.128912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.128938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.619 [2024-11-17 11:30:32.129021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.129047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.129129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.129156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.129241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.129268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.129359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.129398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 00:36:07.619 [2024-11-17 11:30:32.129488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.619 [2024-11-17 11:30:32.129518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.619 qpair failed and we were unable to recover it. 
00:36:07.619 [2024-11-17 11:30:32.129607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.129634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.129716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.129743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.129826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.129852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.129952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.129978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.130964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.130991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.131865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.619 [2024-11-17 11:30:32.131973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.619 [2024-11-17 11:30:32.132000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.619 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.132914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.132942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.133867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.133893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.134922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.134949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.135926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.135996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.136022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.136109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.136138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.136217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.136247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.136350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.136390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.136509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.620 [2024-11-17 11:30:32.136546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.620 qpair failed and we were unable to recover it.
00:36:07.620 [2024-11-17 11:30:32.136622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.136649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.136734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.136761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.136839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.136865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.136982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.137951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.137977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.138927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.138955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.139949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.139975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.621 [2024-11-17 11:30:32.140083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.621 [2024-11-17 11:30:32.140109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.621 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.140923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.622 [2024-11-17 11:30:32.140950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.622 qpair failed and we were unable to recover it.
00:36:07.622 [2024-11-17 11:30:32.141033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.141644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.141865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.141891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.142236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.142799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.142912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.142939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.143383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.143892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.143919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.144007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.144575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.144939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.144964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.145045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.145072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 
00:36:07.622 [2024-11-17 11:30:32.145183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.622 [2024-11-17 11:30:32.145210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.622 qpair failed and we were unable to recover it. 00:36:07.622 [2024-11-17 11:30:32.145301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.145410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.145532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.145637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.145772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.145944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.145975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.146425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.146939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.146968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.147055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.147693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.147935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.147961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.148498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.148873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.148988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.149129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.149249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.149379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.149538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.149709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.149828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.149970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.149997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.150444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 00:36:07.623 [2024-11-17 11:30:32.150926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.150953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.623 qpair failed and we were unable to recover it. 
00:36:07.623 [2024-11-17 11:30:32.151035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.623 [2024-11-17 11:30:32.151061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.151658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.151892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.151918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.152236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.152849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.152953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.152979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.153436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.153932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.153958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.154098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.154683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.154911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.154937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.155228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.155865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.155892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.155981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.156007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.156096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.156123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.156199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.156224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 00:36:07.624 [2024-11-17 11:30:32.156344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.624 [2024-11-17 11:30:32.156371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.624 qpair failed and we were unable to recover it. 
00:36:07.624 [2024-11-17 11:30:32.156457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.156481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.156585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.156611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.156691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.156716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.156802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.156834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.156916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.156941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.157050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.157676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.157929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.157954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.158279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.158854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.158965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.158991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.159443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.159880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.159905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.159982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 
00:36:07.625 [2024-11-17 11:30:32.160585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.625 qpair failed and we were unable to recover it. 00:36:07.625 [2024-11-17 11:30:32.160961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.625 [2024-11-17 11:30:32.160988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.161211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.161834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.161962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.161989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.162402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.162836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.162862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.162978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.163546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.163892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.163919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 00:36:07.626 [2024-11-17 11:30:32.164009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.626 [2024-11-17 11:30:32.164036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.626 qpair failed and we were unable to recover it. 
00:36:07.626 [2024-11-17 11:30:32.164123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.164890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.164916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.165892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.165974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.166000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.626 [2024-11-17 11:30:32.166108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.626 [2024-11-17 11:30:32.166135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.626 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.166964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.166990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.167895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.167921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.168937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.168964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.169973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.169998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.170896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.170980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.171006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.171080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.171106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.171218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.171244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.171321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.171347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.171435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.627 [2024-11-17 11:30:32.171460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.627 qpair failed and we were unable to recover it.
00:36:07.627 [2024-11-17 11:30:32.171552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.171581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.171665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.171693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.171776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.171802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.171900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.172938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.172964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.173900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.173985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.174906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.174999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.175033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.175113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.175139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.175233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.628 [2024-11-17 11:30:32.175260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.628 qpair failed and we were unable to recover it.
00:36:07.628 [2024-11-17 11:30:32.175343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.175454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.175574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.175679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.175807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-11-17 11:30:32.175920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.175947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.176039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.176066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.176150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.176178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.176264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.176290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.176371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.176397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 
00:36:07.628 [2024-11-17 11:30:32.176479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.628 [2024-11-17 11:30:32.176505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.628 qpair failed and we were unable to recover it. 00:36:07.628 [2024-11-17 11:30:32.176600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.176626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.176713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.176740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.176821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.176847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.176935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.177042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.177604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.177906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.177989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.178285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.178860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.178971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.178997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.179446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.179958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.179985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.180085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.180634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.180942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.180968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.181075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.181102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 
00:36:07.629 [2024-11-17 11:30:32.181187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.629 [2024-11-17 11:30:32.181213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.629 qpair failed and we were unable to recover it. 00:36:07.629 [2024-11-17 11:30:32.181302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.181408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.181547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.181660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.181768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.181870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.181896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.181990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.182364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.182927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.182954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.183043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.183614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.183937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.183963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.184150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.184739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.184957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.184985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.185324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.185804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.185923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.185949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.186047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.186076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.186153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.186183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.186275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.186302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 00:36:07.630 [2024-11-17 11:30:32.186384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.186411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.630 qpair failed and we were unable to recover it. 
00:36:07.630 [2024-11-17 11:30:32.186503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.630 [2024-11-17 11:30:32.186539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.186615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.186645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.186746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.186773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.186858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.186889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.186973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.186999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-11-17 11:30:32.187083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-11-17 11:30:32.187643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.187907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.187990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-11-17 11:30:32.188229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 
00:36:07.631 [2024-11-17 11:30:32.188852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.188965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.188991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.189071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.189103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.631 [2024-11-17 11:30:32.189183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.631 [2024-11-17 11:30:32.189210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.631 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.189311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.189358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 
00:36:07.904 [2024-11-17 11:30:32.189474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.189510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.189629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.189661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.189746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.189775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.189864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.189891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.189984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.190020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 
00:36:07.904 [2024-11-17 11:30:32.190113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.190141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.190242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.904 [2024-11-17 11:30:32.190268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.904 qpair failed and we were unable to recover it. 00:36:07.904 [2024-11-17 11:30:32.190358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.190480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.190608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.190722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.190842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.190969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.190995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.191326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.191879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.191906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.191985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.192442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.192938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.193025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.193606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.193961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.193988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.194071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 
00:36:07.905 [2024-11-17 11:30:32.194188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.194296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.194413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.194519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.905 qpair failed and we were unable to recover it. 00:36:07.905 [2024-11-17 11:30:32.194631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.905 [2024-11-17 11:30:32.194657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.194737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.194762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.194839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.194869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.194959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.194986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.195324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.195823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.195937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.195963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.196518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.196891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.196917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.197012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.197126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.197230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.197334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.197441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 00:36:07.906 [2024-11-17 11:30:32.197558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.906 [2024-11-17 11:30:32.197585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.906 qpair failed and we were unable to recover it. 
00:36:07.906 [2024-11-17 11:30:32.197677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.197705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.197793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.197821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.197932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.197958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.198922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.906 [2024-11-17 11:30:32.198948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.906 qpair failed and we were unable to recover it.
00:36:07.906 [2024-11-17 11:30:32.199071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.199898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.199923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.200910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.200997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.201809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.201836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.202011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:07.907 [2024-11-17 11:30:32.202177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.202309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.202426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:07.907 [2024-11-17 11:30:32.202556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.202686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:07.907 [2024-11-17 11:30:32.202828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.202938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.202965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.203059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.203088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.907 [2024-11-17 11:30:32.203198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.203224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.203317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.203345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.203421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.907 [2024-11-17 11:30:32.203447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.907 qpair failed and we were unable to recover it.
00:36:07.907 [2024-11-17 11:30:32.203558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.203586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.203671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.203697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.203774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.203799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.203910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.203935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.204965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.204991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.205963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.205993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.206835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.206976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.207900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.207926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.208007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.908 [2024-11-17 11:30:32.208033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.908 qpair failed and we were unable to recover it.
00:36:07.908 [2024-11-17 11:30:32.208151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.208943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.208969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.209896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.209988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.210916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.210941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.909 [2024-11-17 11:30:32.211854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.909 [2024-11-17 11:30:32.211882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.909 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.211972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.211998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.212913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.212939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.213884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.213912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.214898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.214925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.215940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.215967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.216048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.216073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.216152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.216178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.216290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.216316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.216415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.216442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.910 [2024-11-17 11:30:32.216545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.910 [2024-11-17 11:30:32.216571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.910 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.216653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.216679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.216795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.216827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.216905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.216930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.217873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.217913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.218902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.218979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.219945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.219971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.220049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.220074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.220213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.220254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.220345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.220373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.220459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.911 [2024-11-17 11:30:32.220492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.911 qpair failed and we were unable to recover it.
00:36:07.911 [2024-11-17 11:30:32.220607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.220633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 00:36:07.911 [2024-11-17 11:30:32.220712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.220738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 00:36:07.911 [2024-11-17 11:30:32.220826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.220855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 00:36:07.911 [2024-11-17 11:30:32.220972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.220997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 00:36:07.911 [2024-11-17 11:30:32.221086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.221118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 
00:36:07.911 [2024-11-17 11:30:32.221216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.911 [2024-11-17 11:30:32.221243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.911 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.221328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.221453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.221563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.221677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.221787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.221899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.221927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.222438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.222904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.222931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.223014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.223593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.223937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.223964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.224190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.224776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.224913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.224938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 
00:36:07.912 [2024-11-17 11:30:32.225385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.912 [2024-11-17 11:30:32.225759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.912 [2024-11-17 11:30:32.225785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.912 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.225873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.225899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.225976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.226582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.226943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.226973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.227849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.227957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.227984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.228435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.228898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.228924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.913 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 [2024-11-17 11:30:32.229062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.229103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.229219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.229272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.913 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.229377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.229406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.229491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.229519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 
00:36:07.913 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.913 [2024-11-17 11:30:32.229632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 [2024-11-17 11:30:32.229664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.913 qpair failed and we were unable to recover it. 00:36:07.913 [2024-11-17 11:30:32.229753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.913 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.913 [2024-11-17 11:30:32.229779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.229868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.229895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.229979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 
00:36:07.914 [2024-11-17 11:30:32.230254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 
00:36:07.914 [2024-11-17 11:30:32.230846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.230952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.230978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 
00:36:07.914 [2024-11-17 11:30:32.231422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 00:36:07.914 [2024-11-17 11:30:32.231906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.914 [2024-11-17 11:30:32.231933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.914 qpair failed and we were unable to recover it. 
00:36:07.914 [2024-11-17 11:30:32.232064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.232893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.232924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.914 qpair failed and we were unable to recover it.
00:36:07.914 [2024-11-17 11:30:32.233852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.914 [2024-11-17 11:30:32.233878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.233954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.233983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.234897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.234924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.235907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.235933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.236883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.236978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.237888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.237924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.238030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.238066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.238235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.238271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.238375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.238403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.238496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.238529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.915 qpair failed and we were unable to recover it.
00:36:07.915 [2024-11-17 11:30:32.238628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.915 [2024-11-17 11:30:32.238657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.238756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.238783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.238859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.238891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.238977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.239911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.239989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.240901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.240981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.241926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.241955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.242899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.242987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.243017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.243145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.243173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.243258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.243283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.916 [2024-11-17 11:30:32.243367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.916 [2024-11-17 11:30:32.243404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.916 qpair failed and we were unable to recover it.
00:36:07.917 [2024-11-17 11:30:32.243493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.917 [2024-11-17 11:30:32.243529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.917 qpair failed and we were unable to recover it.
00:36:07.917 [2024-11-17 11:30:32.243620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.917 [2024-11-17 11:30:32.243646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.917 qpair failed and we were unable to recover it.
00:36:07.917 [2024-11-17 11:30:32.243751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.917 [2024-11-17 11:30:32.243778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.917 qpair failed and we were unable to recover it.
00:36:07.917 [2024-11-17 11:30:32.243868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.243895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.244479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.244906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.244995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.245114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.245728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.245954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.245979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.246333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.246777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.246893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.246922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.917 [2024-11-17 11:30:32.247518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.247902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.247927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 00:36:07.917 [2024-11-17 11:30:32.248034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.917 [2024-11-17 11:30:32.248061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.917 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.248144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.248270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.248394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.248507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.248658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.248766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.248930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.248957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.249431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.249929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.249955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.250052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.250685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.250914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.250940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.251303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.251791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 
00:36:07.918 [2024-11-17 11:30:32.251934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.251962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.252063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.252089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.252195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.252220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.918 [2024-11-17 11:30:32.252316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.918 [2024-11-17 11:30:32.252344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.918 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.252423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.252455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.252559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.252595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.252680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.252709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.252789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.252814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.252932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.252958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.253095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.253204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.253362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.253509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.253641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.253756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.253866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.253892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.254476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.254878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.254994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.255136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.255283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.255397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.255512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.255640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.255801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.255926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.255953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.256589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.256929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.256955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.257036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.257062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 
00:36:07.919 [2024-11-17 11:30:32.257254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.257280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.919 [2024-11-17 11:30:32.257369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.919 [2024-11-17 11:30:32.257396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.919 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.257505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.257540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.257621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.257647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.257729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.257760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.257868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.257894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.258473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.258898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.258988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.259098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.259205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.259326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.259478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.259616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.259728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.259843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.259873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.260422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.260867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.260970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.260998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 
00:36:07.920 [2024-11-17 11:30:32.261551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.261899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.261925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.920 qpair failed and we were unable to recover it. 00:36:07.920 [2024-11-17 11:30:32.262008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.920 [2024-11-17 11:30:32.262037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.262123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.262232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.262339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.262442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.262587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.262754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.262893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.262919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.263404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.263883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.263909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.263996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.264601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.264969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.264995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.265075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.265200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.265312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.265450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.265564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.265701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.265814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.265841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 
00:36:07.921 [2024-11-17 11:30:32.266549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.921 [2024-11-17 11:30:32.266727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.921 qpair failed and we were unable to recover it. 00:36:07.921 [2024-11-17 11:30:32.266819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.266846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.266955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.266981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.267068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-11-17 11:30:32.267174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.267280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.267386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.267498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 00:36:07.922 [2024-11-17 11:30:32.267618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.922 [2024-11-17 11:30:32.267644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.922 qpair failed and we were unable to recover it. 
00:36:07.922 [2024-11-17 11:30:32.267738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.267764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.267878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.267904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.267984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.268962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.268987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.269908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.269935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.270899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.270986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.271018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.271117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.271143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.271222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.271250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.922 [2024-11-17 11:30:32.271337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.922 [2024-11-17 11:30:32.271363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.922 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.271443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.271470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.271557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.271584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.271659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.271686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.271771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.271796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.271895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.271925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.272900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.272926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.273893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.273987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.274934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.274960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.275040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.275066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.275147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.923 [2024-11-17 11:30:32.275173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.923 qpair failed and we were unable to recover it.
00:36:07.923 [2024-11-17 11:30:32.275282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.275902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.275992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.276929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.276954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 Malloc0
00:36:07.924 [2024-11-17 11:30:32.277783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.277876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.277904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.924 [2024-11-17 11:30:32.278239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:07.924 [2024-11-17 11:30:32.278374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:07.924 [2024-11-17 11:30:32.278578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.924 [2024-11-17 11:30:32.278690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.278928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.278955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.279099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.279237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.279376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.279502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.924 [2024-11-17 11:30:32.279640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.924 qpair failed and we were unable to recover it.
00:36:07.924 [2024-11-17 11:30:32.279725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.279753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.279843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.279871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.279954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.279981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.280883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.280911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281427] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:07.925 [2024-11-17 11:30:32.281482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.281948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.281975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.282900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.282994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.283940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.283966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.284051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.925 [2024-11-17 11:30:32.284076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.925 qpair failed and we were unable to recover it.
00:36:07.925 [2024-11-17 11:30:32.284168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.284929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.285903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.285929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.286919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.286997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.287916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.287941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.926 [2024-11-17 11:30:32.288719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.926 [2024-11-17 11:30:32.288745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.926 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.288833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.288860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.288943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.288973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.927 [2024-11-17 11:30:32.289669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:07.927 [2024-11-17 11:30:32.289813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.289897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.289924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.290015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.290046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.927 [2024-11-17 11:30:32.290131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.290156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.290240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.927 [2024-11-17 11:30:32.290264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.927 qpair failed and we were unable to recover it.
00:36:07.927 [2024-11-17 11:30:32.290348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.290374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.290466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.290493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.290594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.290620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.290706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.290733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.290822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.290848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-11-17 11:30:32.290969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-11-17 11:30:32.291640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.291879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.291918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.292006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.292034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.292148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.292175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 
00:36:07.927 [2024-11-17 11:30:32.292260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.292286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.292397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.292423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.927 qpair failed and we were unable to recover it. 00:36:07.927 [2024-11-17 11:30:32.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.927 [2024-11-17 11:30:32.292560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.292641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.292668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.292745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.292770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.292859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.292884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.292960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.292985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.293095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.293231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.293375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.293490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.293732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.293893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.293921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.294239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.294887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.294913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.294994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.295447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.295888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.295914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.296000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.296639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.296958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.296984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 00:36:07.928 [2024-11-17 11:30:32.297068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.928 [2024-11-17 11:30:32.297093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.928 qpair failed and we were unable to recover it. 
00:36:07.928 [2024-11-17 11:30:32.297177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.297302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.297423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.297541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.297655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.929 [2024-11-17 11:30:32.297683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.297762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:07.929 [2024-11-17 11:30:32.297875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.297902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.929 [2024-11-17 11:30:32.298036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.929 [2024-11-17 11:30:32.298062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.298266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.298833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.298939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.298965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.299422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.299862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.299969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.299995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.300590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.300910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.300936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.301013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.301039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 
00:36:07.929 [2024-11-17 11:30:32.301126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.929 [2024-11-17 11:30:32.301154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420 00:36:07.929 qpair failed and we were unable to recover it. 00:36:07.929 [2024-11-17 11:30:32.301269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.930 [2024-11-17 11:30:32.301297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.930 qpair failed and we were unable to recover it. 00:36:07.930 [2024-11-17 11:30:32.301379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.930 [2024-11-17 11:30:32.301405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.930 qpair failed and we were unable to recover it. 00:36:07.930 [2024-11-17 11:30:32.301487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.930 [2024-11-17 11:30:32.301514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.930 qpair failed and we were unable to recover it. 00:36:07.930 [2024-11-17 11:30:32.301614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.930 [2024-11-17 11:30:32.301640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420 00:36:07.930 qpair failed and we were unable to recover it. 
00:36:07.930 [2024-11-17 11:30:32.301838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.301864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.301945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.301971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.302893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.302921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.303907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.303932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.304964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.304990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.930 [2024-11-17 11:30:32.305704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 [2024-11-17 11:30:32.305732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.930 qpair failed and we were unable to recover it.
00:36:07.930 [2024-11-17 11:30:32.305820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.930 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:07.931 [2024-11-17 11:30:32.305846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:07.931 [2024-11-17 11:30:32.306044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.306961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.306994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.307878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.307904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39c4000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.308904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.308988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.309015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.309105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.309132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.309216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.309242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39b8000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.309342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.309381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f39bc000b90 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.309482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.931 [2024-11-17 11:30:32.309521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edeb40 with addr=10.0.0.2, port=4420
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 [2024-11-17 11:30:32.309700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:07.931 [2024-11-17 11:30:32.312305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.931 [2024-11-17 11:30:32.312419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.931 [2024-11-17 11:30:32.312447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.931 [2024-11-17 11:30:32.312462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.931 [2024-11-17 11:30:32.312474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.931 [2024-11-17 11:30:32.312510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.931 qpair failed and we were unable to recover it.
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:07.931 11:30:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 403173
00:36:07.931 [2024-11-17 11:30:32.322067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.931 [2024-11-17 11:30:32.322159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.322185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.322199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.322211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.322241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.332063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.332188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.332219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.332234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.332246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.332275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.342120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.342208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.342236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.342251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.342263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.342303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.352069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.352183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.352209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.352222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.352235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.352276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.362094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.362186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.362211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.362225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.362237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.362266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.372125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.372212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.372237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.372250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.372270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.372300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.382150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.382260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.382286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.382300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.382312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.382341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.392262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:07.932 [2024-11-17 11:30:32.392353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:07.932 [2024-11-17 11:30:32.392378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:07.932 [2024-11-17 11:30:32.392391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:07.932 [2024-11-17 11:30:32.392403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:07.932 [2024-11-17 11:30:32.392432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:07.932 qpair failed and we were unable to recover it.
00:36:07.932 [2024-11-17 11:30:32.402211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.932 [2024-11-17 11:30:32.402298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.932 [2024-11-17 11:30:32.402324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.932 [2024-11-17 11:30:32.402338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.932 [2024-11-17 11:30:32.402350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.932 [2024-11-17 11:30:32.402379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.932 qpair failed and we were unable to recover it. 
00:36:07.932 [2024-11-17 11:30:32.412242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.932 [2024-11-17 11:30:32.412326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.932 [2024-11-17 11:30:32.412350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.932 [2024-11-17 11:30:32.412363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.932 [2024-11-17 11:30:32.412375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.932 [2024-11-17 11:30:32.412404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.932 qpair failed and we were unable to recover it. 
00:36:07.932 [2024-11-17 11:30:32.422270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.932 [2024-11-17 11:30:32.422378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.932 [2024-11-17 11:30:32.422402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.932 [2024-11-17 11:30:32.422415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.932 [2024-11-17 11:30:32.422428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.932 [2024-11-17 11:30:32.422458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.932 qpair failed and we were unable to recover it. 
00:36:07.932 [2024-11-17 11:30:32.432229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.932 [2024-11-17 11:30:32.432332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.932 [2024-11-17 11:30:32.432357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.932 [2024-11-17 11:30:32.432371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.932 [2024-11-17 11:30:32.432383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.932 [2024-11-17 11:30:32.432413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.932 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.442288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.442421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.442447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.442461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.442473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.442503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.452303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.452418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.452446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.452462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.452474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.452503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.462387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.462540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.462572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.462586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.462599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.462630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.472428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.472518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.472549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.472563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.472575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.472605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.482372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.482460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.482484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.482498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.482510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.482548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.492398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.492477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.492501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.492514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.492534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.492566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.502433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.502559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.502585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.502598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.502615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.502646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.512513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.512655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.512681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.512694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.512706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.512736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.522500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.522645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.522672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.522685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.522697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.522727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.532546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.532632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.532656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.532669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.532681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.532710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:07.933 [2024-11-17 11:30:32.542540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:07.933 [2024-11-17 11:30:32.542640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:07.933 [2024-11-17 11:30:32.542664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:07.933 [2024-11-17 11:30:32.542677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:07.933 [2024-11-17 11:30:32.542689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:07.933 [2024-11-17 11:30:32.542719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:07.933 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.552579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.552661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.552685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.552699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.552711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.552740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.562612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.562702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.562726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.562739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.562751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.562780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.572647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.572766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.572791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.572805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.572816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.572846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.582688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.582781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.582805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.582817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.582829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.582858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.592724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.592813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.592839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.592852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.592863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.592893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.602811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.602890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.602914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.602927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.602939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.602968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.612758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.612859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.612884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.612898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.612910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.612939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.622825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.622952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.622981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.622996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.623008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.623043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.632885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.632970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.632994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.633013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.633025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.633054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.642820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.642906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.642931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.642944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.642956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.642985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.652872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.652996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.653025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.653040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.653052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.653082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.662930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.193 [2024-11-17 11:30:32.663022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.193 [2024-11-17 11:30:32.663051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.193 [2024-11-17 11:30:32.663065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.193 [2024-11-17 11:30:32.663077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.193 [2024-11-17 11:30:32.663106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.193 qpair failed and we were unable to recover it. 
00:36:08.193 [2024-11-17 11:30:32.672909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.672991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.673016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.673029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.673040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.673075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.682961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.683050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.683075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.683088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.683100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.683130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.692965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.693046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.693069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.693082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.693094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.693123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.703018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.703110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.703133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.703146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.703158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.703199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.713060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.713178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.713203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.713217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.713228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.713258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.723075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.723170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.723197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.723211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.723223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.723252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.733103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.733191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.733215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.733228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.733240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.733269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.743117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.743221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.743246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.743259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.743271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.743301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.753153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.753237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.753260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.753273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.753284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.753313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.763166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.763248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.763277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.763293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.763305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.763347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.773177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.773269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.773295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.773309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.773321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.773350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.783258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.783346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.783371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.783384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.783396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.783426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.793243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.194 [2024-11-17 11:30:32.793320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.194 [2024-11-17 11:30:32.793344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.194 [2024-11-17 11:30:32.793357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.194 [2024-11-17 11:30:32.793369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.194 [2024-11-17 11:30:32.793398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.194 qpair failed and we were unable to recover it. 
00:36:08.194 [2024-11-17 11:30:32.803306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.195 [2024-11-17 11:30:32.803411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.195 [2024-11-17 11:30:32.803437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.195 [2024-11-17 11:30:32.803451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.195 [2024-11-17 11:30:32.803463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.195 [2024-11-17 11:30:32.803501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.195 qpair failed and we were unable to recover it. 
00:36:08.195 [2024-11-17 11:30:32.813329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.195 [2024-11-17 11:30:32.813457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.195 [2024-11-17 11:30:32.813482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.195 [2024-11-17 11:30:32.813496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.195 [2024-11-17 11:30:32.813508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.195 [2024-11-17 11:30:32.813543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.195 qpair failed and we were unable to recover it. 
00:36:08.195 [2024-11-17 11:30:32.823377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.195 [2024-11-17 11:30:32.823503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.195 [2024-11-17 11:30:32.823538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.195 [2024-11-17 11:30:32.823554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.195 [2024-11-17 11:30:32.823565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.195 [2024-11-17 11:30:32.823595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.195 qpair failed and we were unable to recover it. 
00:36:08.195 [2024-11-17 11:30:32.833407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.195 [2024-11-17 11:30:32.833510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.195 [2024-11-17 11:30:32.833542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.195 [2024-11-17 11:30:32.833557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.195 [2024-11-17 11:30:32.833569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.195 [2024-11-17 11:30:32.833600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.195 qpair failed and we were unable to recover it. 
00:36:08.195 [2024-11-17 11:30:32.843368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.195 [2024-11-17 11:30:32.843454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.195 [2024-11-17 11:30:32.843478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.195 [2024-11-17 11:30:32.843491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.195 [2024-11-17 11:30:32.843503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.195 [2024-11-17 11:30:32.843541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.195 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.853423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.853507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.853540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.853555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.853566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.853596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.863449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.863547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.863571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.863584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.863595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.863625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.873462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.873555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.873579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.873592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.873604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.873633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.883516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.883630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.883655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.883669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.883681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.883710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.893549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.893667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.893697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.893711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.893723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.893753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.903586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.903676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.903699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.903713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.903724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.903754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.913599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.913687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.913710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.913723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.913735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.913764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.923607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.923708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.454 [2024-11-17 11:30:32.923736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.454 [2024-11-17 11:30:32.923750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.454 [2024-11-17 11:30:32.923762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.454 [2024-11-17 11:30:32.923792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.454 qpair failed and we were unable to recover it. 
00:36:08.454 [2024-11-17 11:30:32.933605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.454 [2024-11-17 11:30:32.933680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.933704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.933717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.933734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.933764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.943678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.943766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.943790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.943804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.943815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.943844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.953684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.953768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.953793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.953806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.953817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.953847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.963720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.963809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.963833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.963846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.963858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.963887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.973820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.973901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.973925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.973939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.973950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.973979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.983875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.983963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.983987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.984000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.984012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.984041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:32.993841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.455 [2024-11-17 11:30:32.993928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.455 [2024-11-17 11:30:32.993954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.455 [2024-11-17 11:30:32.993967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.455 [2024-11-17 11:30:32.993982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.455 [2024-11-17 11:30:32.994014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.455 qpair failed and we were unable to recover it. 
00:36:08.455 [2024-11-17 11:30:33.003841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.003962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.003988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.004002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.004014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.004044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.013869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.013956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.013980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.013993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.014005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.014034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.023915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.024037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.024071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.024086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.024097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.024127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.034000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.034089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.034114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.034128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.034140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.034169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.043997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.044093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.044118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.044131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.044143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.044172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.053937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.054022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.455 [2024-11-17 11:30:33.054046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.455 [2024-11-17 11:30:33.054060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.455 [2024-11-17 11:30:33.054071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.455 [2024-11-17 11:30:33.054100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.455 qpair failed and we were unable to recover it.
00:36:08.455 [2024-11-17 11:30:33.064026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.455 [2024-11-17 11:30:33.064110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.456 [2024-11-17 11:30:33.064135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.456 [2024-11-17 11:30:33.064154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.456 [2024-11-17 11:30:33.064166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.456 [2024-11-17 11:30:33.064196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.456 qpair failed and we were unable to recover it.
00:36:08.456 [2024-11-17 11:30:33.074022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.456 [2024-11-17 11:30:33.074105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.456 [2024-11-17 11:30:33.074128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.456 [2024-11-17 11:30:33.074141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.456 [2024-11-17 11:30:33.074153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.456 [2024-11-17 11:30:33.074182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.456 qpair failed and we were unable to recover it.
00:36:08.456 [2024-11-17 11:30:33.084094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.456 [2024-11-17 11:30:33.084181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.456 [2024-11-17 11:30:33.084205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.456 [2024-11-17 11:30:33.084218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.456 [2024-11-17 11:30:33.084230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.456 [2024-11-17 11:30:33.084259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.456 qpair failed and we were unable to recover it.
00:36:08.456 [2024-11-17 11:30:33.094810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.456 [2024-11-17 11:30:33.094944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.456 [2024-11-17 11:30:33.094972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.456 [2024-11-17 11:30:33.094988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.456 [2024-11-17 11:30:33.094999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.456 [2024-11-17 11:30:33.095031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.456 qpair failed and we were unable to recover it.
00:36:08.456 [2024-11-17 11:30:33.104162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.456 [2024-11-17 11:30:33.104253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.456 [2024-11-17 11:30:33.104276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.456 [2024-11-17 11:30:33.104289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.456 [2024-11-17 11:30:33.104301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.456 [2024-11-17 11:30:33.104330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.456 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.114203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.114288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.114312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.114326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.114337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.114366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.124209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.124296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.124319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.124332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.124344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.124373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.134199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.134278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.134305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.134318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.134330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.134359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.144260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.144370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.144395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.144409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.144420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.144450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.154263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.154353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.154380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.154393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.154405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.154434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.164256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.164348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.164374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.164388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.164399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.164429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.174296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.174381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.174404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.174417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.174429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.174458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.184361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.184453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.184476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.184489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.184501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.184539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.194349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.194443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.194468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.194487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.194500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.194541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.204424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.204560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.204585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.204598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.204610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.204640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.214414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.214499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.214530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.214545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.214557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.715 [2024-11-17 11:30:33.214586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.715 qpair failed and we were unable to recover it.
00:36:08.715 [2024-11-17 11:30:33.224506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.715 [2024-11-17 11:30:33.224654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.715 [2024-11-17 11:30:33.224680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.715 [2024-11-17 11:30:33.224693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.715 [2024-11-17 11:30:33.224705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.224734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.234482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.234577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.234607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.234620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.234632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.234667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.244576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.244671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.244696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.244710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.244722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.244751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.254563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.254644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.254668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.254681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.254693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.254723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.264581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.264671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.264695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.264708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.264719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.264749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.274681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.274764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.274790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.274803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.274815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.274844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.284707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.284800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.284825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.284839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.284850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.284880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.294736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.294818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.294844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.294857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.294868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.294898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.304728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.304817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.304841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.304854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.304866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.304895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.314742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.314842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.314868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.314881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.314893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.314922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.324755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.324871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.324901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.324916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.324928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.324958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.334755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.334843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.334867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.334880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.334892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.334920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.344841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:08.716 [2024-11-17 11:30:33.344929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:08.716 [2024-11-17 11:30:33.344954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:08.716 [2024-11-17 11:30:33.344968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:08.716 [2024-11-17 11:30:33.344980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90
00:36:08.716 [2024-11-17 11:30:33.345009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:08.716 qpair failed and we were unable to recover it.
00:36:08.716 [2024-11-17 11:30:33.354962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.716 [2024-11-17 11:30:33.355048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.716 [2024-11-17 11:30:33.355073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.716 [2024-11-17 11:30:33.355086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.716 [2024-11-17 11:30:33.355098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.717 [2024-11-17 11:30:33.355128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.717 qpair failed and we were unable to recover it. 
00:36:08.717 [2024-11-17 11:30:33.364940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.717 [2024-11-17 11:30:33.365021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.717 [2024-11-17 11:30:33.365045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.717 [2024-11-17 11:30:33.365059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.717 [2024-11-17 11:30:33.365071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.717 [2024-11-17 11:30:33.365106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.717 qpair failed and we were unable to recover it. 
00:36:08.975 [2024-11-17 11:30:33.374883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.975 [2024-11-17 11:30:33.374969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.975 [2024-11-17 11:30:33.374995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.975 [2024-11-17 11:30:33.375008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.975 [2024-11-17 11:30:33.375020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.975 [2024-11-17 11:30:33.375049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.975 qpair failed and we were unable to recover it. 
00:36:08.975 [2024-11-17 11:30:33.384933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.975 [2024-11-17 11:30:33.385026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.975 [2024-11-17 11:30:33.385050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.975 [2024-11-17 11:30:33.385064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.975 [2024-11-17 11:30:33.385075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.975 [2024-11-17 11:30:33.385114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.975 qpair failed and we were unable to recover it. 
00:36:08.975 [2024-11-17 11:30:33.395024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.975 [2024-11-17 11:30:33.395109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.975 [2024-11-17 11:30:33.395133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.975 [2024-11-17 11:30:33.395146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.975 [2024-11-17 11:30:33.395158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.975 [2024-11-17 11:30:33.395187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.975 qpair failed and we were unable to recover it. 
00:36:08.975 [2024-11-17 11:30:33.404982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.975 [2024-11-17 11:30:33.405109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.975 [2024-11-17 11:30:33.405134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.975 [2024-11-17 11:30:33.405148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.975 [2024-11-17 11:30:33.405160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.975 [2024-11-17 11:30:33.405189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.975 qpair failed and we were unable to recover it. 
00:36:08.975 [2024-11-17 11:30:33.414989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.975 [2024-11-17 11:30:33.415124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.975 [2024-11-17 11:30:33.415149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.975 [2024-11-17 11:30:33.415163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.415175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.415204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.425110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.425206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.425229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.425243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.425254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.425284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.435045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.435130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.435154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.435166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.435178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.435207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.445092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.445176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.445200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.445213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.445224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.445254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.455081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.455157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.455186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.455200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.455211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.455241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.465142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.465235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.465260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.465274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.465286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.465314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.475235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.475328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.475353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.475367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.475378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.475408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.485202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.485284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.485308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.485321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.485332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.485363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.495234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.495362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.495388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.495402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.495419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.495451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.505267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.505370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.505395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.505409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.505421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.505450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.515278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.515409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.515438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.515452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.515464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.515493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.525373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.525464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.525489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.525503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.525514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.525552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.535381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.535483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.535508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.976 [2024-11-17 11:30:33.535522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.976 [2024-11-17 11:30:33.535547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.976 [2024-11-17 11:30:33.535578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.976 qpair failed and we were unable to recover it. 
00:36:08.976 [2024-11-17 11:30:33.545391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.976 [2024-11-17 11:30:33.545481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.976 [2024-11-17 11:30:33.545505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.545518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.545539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.545582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.555400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.555489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.555513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.555534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.555548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.555578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.565411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.565495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.565519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.565542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.565555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.565585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.575449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.575544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.575569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.575582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.575594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.575624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.585471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.585584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.585614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.585629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.585640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.585670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.595514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.595613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.595637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.595650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.595661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.595691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.605576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.605664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.605688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.605701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.605716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.605746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.615581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.615668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.615692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.615705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.615717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.615747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:08.977 [2024-11-17 11:30:33.625623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:08.977 [2024-11-17 11:30:33.625723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:08.977 [2024-11-17 11:30:33.625748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:08.977 [2024-11-17 11:30:33.625767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:08.977 [2024-11-17 11:30:33.625779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:08.977 [2024-11-17 11:30:33.625809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:08.977 qpair failed and we were unable to recover it. 
00:36:09.236 [2024-11-17 11:30:33.635633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.236 [2024-11-17 11:30:33.635718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.236 [2024-11-17 11:30:33.635743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.236 [2024-11-17 11:30:33.635756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.236 [2024-11-17 11:30:33.635768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.236 [2024-11-17 11:30:33.635809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.236 qpair failed and we were unable to recover it. 
00:36:09.236 [2024-11-17 11:30:33.645659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.236 [2024-11-17 11:30:33.645774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.236 [2024-11-17 11:30:33.645801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.236 [2024-11-17 11:30:33.645814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.236 [2024-11-17 11:30:33.645826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.236 [2024-11-17 11:30:33.645867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.236 qpair failed and we were unable to recover it. 
00:36:09.236 [2024-11-17 11:30:33.655664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.236 [2024-11-17 11:30:33.655750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.236 [2024-11-17 11:30:33.655774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.236 [2024-11-17 11:30:33.655786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.236 [2024-11-17 11:30:33.655798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.236 [2024-11-17 11:30:33.655828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.236 qpair failed and we were unable to recover it. 
00:36:09.236 [2024-11-17 11:30:33.665711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.236 [2024-11-17 11:30:33.665801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.236 [2024-11-17 11:30:33.665827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.236 [2024-11-17 11:30:33.665840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.236 [2024-11-17 11:30:33.665852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.665881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.675739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.675831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.675856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.675869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.675881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.675910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.685831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.685934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.685960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.685974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.685987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.686016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.695816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.695900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.695925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.695939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.695951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.695981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.705883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.705973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.705996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.706009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.706021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.706050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.715874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.715955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.715980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.715994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.716005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.716035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.725974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.726057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.726083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.726097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.726108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.726138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.735998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.736084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.736110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.736124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.736135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.736164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.745989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.746089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.746114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.746127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.746139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.746168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.755994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.756076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.756099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.756118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.756130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.756160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.766080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.766176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.766204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.766220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.766231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.766262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.776015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.776150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.776175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.776189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.776200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.776230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.786140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.786227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.786253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.786266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.786279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.786309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.237 qpair failed and we were unable to recover it. 
00:36:09.237 [2024-11-17 11:30:33.796070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.237 [2024-11-17 11:30:33.796203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.237 [2024-11-17 11:30:33.796228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.237 [2024-11-17 11:30:33.796242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.237 [2024-11-17 11:30:33.796254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.237 [2024-11-17 11:30:33.796289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.806141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.806233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.806257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.806270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.806282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.806312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.816219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.816299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.816324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.816338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.816350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.816379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.826301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.826390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.826414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.826427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.826439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.826468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.836269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.836353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.836390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.836403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.836415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.836444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.846247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.846333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.846358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.846372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.846384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.846414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.856242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.856320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.856344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.856358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.856370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.856399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.866289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.866402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.866428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.866441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.866453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.866483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.876360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.876455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.876498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.876515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.876537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:09.238 [2024-11-17 11:30:33.876575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.238 [2024-11-17 11:30:33.886424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.238 [2024-11-17 11:30:33.886560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.238 [2024-11-17 11:30:33.886597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.238 [2024-11-17 11:30:33.886613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.238 [2024-11-17 11:30:33.886626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.238 [2024-11-17 11:30:33.886657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.238 qpair failed and we were unable to recover it. 
00:36:09.496 [2024-11-17 11:30:33.896386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.496 [2024-11-17 11:30:33.896483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.896520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.896553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.896566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.896597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.906436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.497 [2024-11-17 11:30:33.906544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.906571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.906585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.906598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.906627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.916444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.497 [2024-11-17 11:30:33.916550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.916576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.916590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.916602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.916632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.926445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.497 [2024-11-17 11:30:33.926590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.926622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.926638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.926650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.926685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.936569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.497 [2024-11-17 11:30:33.936687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.936715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.936729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.936741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.936771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.946558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.497 [2024-11-17 11:30:33.946648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.497 [2024-11-17 11:30:33.946674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.497 [2024-11-17 11:30:33.946688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.497 [2024-11-17 11:30:33.946700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40 00:36:09.497 [2024-11-17 11:30:33.946729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:09.497 qpair failed and we were unable to recover it. 
00:36:09.497 [2024-11-17 11:30:33.956590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:33.956684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:33.956709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:33.956722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:33.956734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:33.956763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:33.966588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:33.966706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:33.966730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:33.966743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:33.966755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:33.966782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:33.976594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:33.976719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:33.976744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:33.976758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:33.976770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:33.976798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:33.986670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:33.986775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:33.986800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:33.986814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:33.986825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:33.986853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:33.996655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:33.996760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:33.996788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:33.996805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:33.996817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:33.996847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:34.006712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:34.006807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:34.006833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:34.006847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:34.006859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:34.006888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:34.016745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:34.016831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.497 [2024-11-17 11:30:34.016865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.497 [2024-11-17 11:30:34.016881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.497 [2024-11-17 11:30:34.016893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.497 [2024-11-17 11:30:34.016921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.497 qpair failed and we were unable to recover it.
00:36:09.497 [2024-11-17 11:30:34.026758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.497 [2024-11-17 11:30:34.026879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.026905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.026919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.026931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.026959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.036877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.036965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.036991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.037004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.037015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.037044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.046813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.046925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.046950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.046964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.046975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.047004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.056825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.056951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.056976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.056990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.057001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.057035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.066872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.066982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.067008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.067021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.067033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.067062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.077009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.077095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.077120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.077134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.077146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.077174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.086897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.087030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.087056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.087070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.087082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.087112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.097020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.097107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.097133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.097147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.097158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1edeb40
00:36:09.498 [2024-11-17 11:30:34.097186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.106986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.107074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.107103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.107118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.107130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.498 [2024-11-17 11:30:34.107176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.116992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.117125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.117152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.117166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.117179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.498 [2024-11-17 11:30:34.117209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.127048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.127133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.127158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.127172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.127184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.498 [2024-11-17 11:30:34.127214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.137031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.137110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.137135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.137148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.137159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.498 [2024-11-17 11:30:34.137189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.498 [2024-11-17 11:30:34.147166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.498 [2024-11-17 11:30:34.147261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.498 [2024-11-17 11:30:34.147292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.498 [2024-11-17 11:30:34.147307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.498 [2024-11-17 11:30:34.147318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.498 [2024-11-17 11:30:34.147348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.498 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.157150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.757 [2024-11-17 11:30:34.157242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.757 [2024-11-17 11:30:34.157272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.757 [2024-11-17 11:30:34.157286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.757 [2024-11-17 11:30:34.157298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.757 [2024-11-17 11:30:34.157327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.757 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.167123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.757 [2024-11-17 11:30:34.167203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.757 [2024-11-17 11:30:34.167228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.757 [2024-11-17 11:30:34.167242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.757 [2024-11-17 11:30:34.167254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.757 [2024-11-17 11:30:34.167284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.757 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.177150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.757 [2024-11-17 11:30:34.177235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.757 [2024-11-17 11:30:34.177259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.757 [2024-11-17 11:30:34.177273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.757 [2024-11-17 11:30:34.177285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.757 [2024-11-17 11:30:34.177314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.757 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.187178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.757 [2024-11-17 11:30:34.187270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.757 [2024-11-17 11:30:34.187295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.757 [2024-11-17 11:30:34.187308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.757 [2024-11-17 11:30:34.187325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.757 [2024-11-17 11:30:34.187355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.757 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.197238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.757 [2024-11-17 11:30:34.197327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.757 [2024-11-17 11:30:34.197356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.757 [2024-11-17 11:30:34.197370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.757 [2024-11-17 11:30:34.197382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.757 [2024-11-17 11:30:34.197412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.757 qpair failed and we were unable to recover it.
00:36:09.757 [2024-11-17 11:30:34.207245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.207380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.207406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.207420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.207432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.207461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.217285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.217369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.217394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.217408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.217419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.217448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.227334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.227425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.227450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.227464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.227475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.227505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.237323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.237425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.237451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.237465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.237477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.237507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.247363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.247470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.247496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.247509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.247522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.247562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.257406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:09.758 [2024-11-17 11:30:34.257485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:09.758 [2024-11-17 11:30:34.257509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:09.758 [2024-11-17 11:30:34.257522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:09.758 [2024-11-17 11:30:34.257544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:09.758 [2024-11-17 11:30:34.257574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:09.758 qpair failed and we were unable to recover it.
00:36:09.758 [2024-11-17 11:30:34.267435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.267540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.267569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.267583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.267595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.267626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.277441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.277520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.277557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.277572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.277584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.277627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.287489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.287581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.287609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.287622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.287634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.287664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.297556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.297684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.297709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.297722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.297734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.297765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.307573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.307658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.307683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.307696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.307708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.307750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.317566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.317662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.317691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.317707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.317725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.317757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.327589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.327707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.758 [2024-11-17 11:30:34.327734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.758 [2024-11-17 11:30:34.327748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.758 [2024-11-17 11:30:34.327760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.758 [2024-11-17 11:30:34.327791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.758 qpair failed and we were unable to recover it. 
00:36:09.758 [2024-11-17 11:30:34.337602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.758 [2024-11-17 11:30:34.337711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.337737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.337750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.337762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.337792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.347663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.347757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.347785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.347798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.347810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.347839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.357767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.357853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.357879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.357893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.357904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.357934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.367708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.367836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.367863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.367876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.367888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.367917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.377745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.377873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.377901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.377915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.377927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.377957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.387885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.387975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.388000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.388014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.388026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.388055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.397773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.397864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.397888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.397901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.397912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.397943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:09.759 [2024-11-17 11:30:34.407790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:09.759 [2024-11-17 11:30:34.407882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:09.759 [2024-11-17 11:30:34.407906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:09.759 [2024-11-17 11:30:34.407919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:09.759 [2024-11-17 11:30:34.407931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:09.759 [2024-11-17 11:30:34.407962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:09.759 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.417825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.417918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.417944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.417958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.417970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.417999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.427864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.427956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.427981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.427993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.428005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.428035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.437884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.437973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.437997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.438010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.438022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.438052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.447917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.448002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.448026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.448048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.448061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.448091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.457989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.458080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.458104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.458117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.458130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.458159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.018 [2024-11-17 11:30:34.468014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.018 [2024-11-17 11:30:34.468106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.018 [2024-11-17 11:30:34.468131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.018 [2024-11-17 11:30:34.468144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.018 [2024-11-17 11:30:34.468156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.018 [2024-11-17 11:30:34.468185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.018 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.477998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.478082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.478107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.478120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.478132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.478161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.488016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.488097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.488121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.488133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.488145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.488180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.498091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.498212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.498241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.498255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.498266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.498296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.508140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.508234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.508262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.508276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.508288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.508318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.518146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.518228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.518252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.518265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.518277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.518306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.528185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.528268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.528292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.528305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.528317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.528347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.538168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.538263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.538291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.538305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.538317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.538346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.548213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.548304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.548330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.548343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.548355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.548384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.558222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.558308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.558332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.558345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.558357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.558386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.568269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.568354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.568379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.568392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.568404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.568433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.578383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.578473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.578501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.578531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.578547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.578578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.588337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.588425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.588450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.588464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.588475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.019 [2024-11-17 11:30:34.588507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.019 qpair failed and we were unable to recover it. 
00:36:10.019 [2024-11-17 11:30:34.598328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.019 [2024-11-17 11:30:34.598406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.019 [2024-11-17 11:30:34.598430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.019 [2024-11-17 11:30:34.598443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.019 [2024-11-17 11:30:34.598455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.598485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.608361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.608444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.608468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.608482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.608494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.608531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.618431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.618514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.618551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.618566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.618579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.618616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.628537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.628639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.628664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.628678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.628690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.628720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.638561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.638650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.638676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.638690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.638701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.638731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.648505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.648596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.648621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.648634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.648646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.648676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.658515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.658607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.658631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.658644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.658656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.658685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.020 [2024-11-17 11:30:34.668655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.020 [2024-11-17 11:30:34.668755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.020 [2024-11-17 11:30:34.668781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.020 [2024-11-17 11:30:34.668795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.020 [2024-11-17 11:30:34.668806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.020 [2024-11-17 11:30:34.668836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.020 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.678662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.678760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.678786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.678800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.678811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.678841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.688648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.688765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.688792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.688805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.688817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.688849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.698620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.698702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.698727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.698740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.698752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.698782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.708656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.708740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.708775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.708790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.708802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.708832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.718711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.718791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.718816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.718830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.718842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.718871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.728702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.728781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.728805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.728818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.728829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.728858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.738761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.738841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.738864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.738877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.738889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.738919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.748778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.748906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.748931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.748945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.748962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.748992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.758849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.758930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.758958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.758973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.758985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.279 [2024-11-17 11:30:34.759014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.279 qpair failed and we were unable to recover it. 
00:36:10.279 [2024-11-17 11:30:34.768854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.279 [2024-11-17 11:30:34.768945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.279 [2024-11-17 11:30:34.768974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.279 [2024-11-17 11:30:34.768988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.279 [2024-11-17 11:30:34.769000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.769029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.778874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.778966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.778991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.779005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.779016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.779046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.788885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.789006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.789031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.789045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.789057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.789086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.798942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.799026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.799051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.799065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.799076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.799106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.809033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.809163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.809189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.809202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.809214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.809244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.818944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.819028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.819052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.819066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.819078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.819107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.829045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.829165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.829194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.829213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.829226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.829258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.839065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.839163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.839198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.839214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.839227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.839258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.849022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.849103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.849128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.849142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.849154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.849183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.859162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.859252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.859280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.859293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.859305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.859334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.869105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.869193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.869217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.869231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.869242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.869272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.879209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.879312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.879337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.879351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.879368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.879398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.280 [2024-11-17 11:30:34.889170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.280 [2024-11-17 11:30:34.889254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.280 [2024-11-17 11:30:34.889279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.280 [2024-11-17 11:30:34.889292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.280 [2024-11-17 11:30:34.889305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.280 [2024-11-17 11:30:34.889345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.280 qpair failed and we were unable to recover it. 
00:36:10.281 [2024-11-17 11:30:34.899182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.281 [2024-11-17 11:30:34.899266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.281 [2024-11-17 11:30:34.899291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.281 [2024-11-17 11:30:34.899305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.281 [2024-11-17 11:30:34.899317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.281 [2024-11-17 11:30:34.899347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.281 qpair failed and we were unable to recover it. 
00:36:10.281 [2024-11-17 11:30:34.909253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.281 [2024-11-17 11:30:34.909380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.281 [2024-11-17 11:30:34.909405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.281 [2024-11-17 11:30:34.909419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.281 [2024-11-17 11:30:34.909431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.281 [2024-11-17 11:30:34.909460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.281 qpair failed and we were unable to recover it. 
00:36:10.281 [2024-11-17 11:30:34.919327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.281 [2024-11-17 11:30:34.919404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.281 [2024-11-17 11:30:34.919429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.281 [2024-11-17 11:30:34.919443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.281 [2024-11-17 11:30:34.919454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.281 [2024-11-17 11:30:34.919483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.281 qpair failed and we were unable to recover it. 
00:36:10.281 [2024-11-17 11:30:34.929392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.281 [2024-11-17 11:30:34.929479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.281 [2024-11-17 11:30:34.929503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.281 [2024-11-17 11:30:34.929516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.281 [2024-11-17 11:30:34.929540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.281 [2024-11-17 11:30:34.929573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.281 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.939302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.939428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.939453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.939467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.939479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.939508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.949349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.949468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.949494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.949507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.949520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.949561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.959432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.959515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.959548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.959562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.959574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.959604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.969360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.969453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.969477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.969490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.969502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.969540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.979406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.979549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.979575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.979588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.979599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.979629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.989506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.989611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.989641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.989655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.989666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.989696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:34.999474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:34.999563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:34.999588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:34.999601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:34.999613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:34.999643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:35.009507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:35.009601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:35.009625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:35.009644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:35.009657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:35.009687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:35.019545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:35.019634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:35.019658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:35.019671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.540 [2024-11-17 11:30:35.019683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.540 [2024-11-17 11:30:35.019712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.540 qpair failed and we were unable to recover it. 
00:36:10.540 [2024-11-17 11:30:35.029571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.540 [2024-11-17 11:30:35.029659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.540 [2024-11-17 11:30:35.029682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.540 [2024-11-17 11:30:35.029695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.029707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.029737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.039584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.039690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.039716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.039729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.039741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.039770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.049684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.049768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.049791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.049804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.049816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.049850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.059651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.059736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.059760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.059773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.059785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.059814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.069674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.069761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.069785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.069798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.069811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.069840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.079823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.079954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.079979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.079994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.080005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.080035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.089803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.089899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.089925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.089938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.089949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.089979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.099784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.099882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.099907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.099921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.099932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.099962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.109883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.109968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.109991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.110004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.110016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.110045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.119800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.119881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.119904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.119918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.119930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.119959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.129958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.130040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.130065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.130078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.130090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.130119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.139954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.140044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.140075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.140090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.140101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.140131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.149892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.149978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.150002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.150016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.541 [2024-11-17 11:30:35.150027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.541 [2024-11-17 11:30:35.150057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.541 qpair failed and we were unable to recover it. 
00:36:10.541 [2024-11-17 11:30:35.159964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.541 [2024-11-17 11:30:35.160053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.541 [2024-11-17 11:30:35.160081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.541 [2024-11-17 11:30:35.160095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.542 [2024-11-17 11:30:35.160107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.542 [2024-11-17 11:30:35.160137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.542 qpair failed and we were unable to recover it. 
00:36:10.542 [2024-11-17 11:30:35.169951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:10.542 [2024-11-17 11:30:35.170036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:10.542 [2024-11-17 11:30:35.170061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:10.542 [2024-11-17 11:30:35.170074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:10.542 [2024-11-17 11:30:35.170086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:10.542 [2024-11-17 11:30:35.170117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:10.542 qpair failed and we were unable to recover it. 
00:36:10.542 [2024-11-17 11:30:35.179980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.542 [2024-11-17 11:30:35.180064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.542 [2024-11-17 11:30:35.180088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.542 [2024-11-17 11:30:35.180102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.542 [2024-11-17 11:30:35.180113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.542 [2024-11-17 11:30:35.180161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.542 qpair failed and we were unable to recover it.
00:36:10.542 [2024-11-17 11:30:35.190169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.542 [2024-11-17 11:30:35.190262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.542 [2024-11-17 11:30:35.190287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.542 [2024-11-17 11:30:35.190300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.542 [2024-11-17 11:30:35.190312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.542 [2024-11-17 11:30:35.190341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.542 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.200066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.200195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.200221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.200238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.200250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.200282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.210148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.210228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.210253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.210267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.210279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.210308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.220073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.220159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.220182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.220195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.220206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.220236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.230116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.230217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.230243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.230257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.230268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.230298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.240159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.240284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.240310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.240324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.240336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.240379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.250186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.250267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.250291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.250304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.250316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.250345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.260195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.260305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.260331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.260345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.260357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.260387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.270249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.270374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.801 [2024-11-17 11:30:35.270404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.801 [2024-11-17 11:30:35.270419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.801 [2024-11-17 11:30:35.270431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.801 [2024-11-17 11:30:35.270460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.801 qpair failed and we were unable to recover it.
00:36:10.801 [2024-11-17 11:30:35.280287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.801 [2024-11-17 11:30:35.280413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.280438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.280452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.280463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.280493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.290276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.290410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.290435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.290449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.290461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.290490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.300334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.300449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.300474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.300487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.300499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.300537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.310369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.310461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.310490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.310504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.310521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.310562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.320406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.320486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.320512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.320532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.320545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.320575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.330390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.330472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.330496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.330508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.330520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.330561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.340468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.340563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.340587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.340601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.340613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.340643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.350497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.350742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.350768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.350782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.350794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.350824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.360498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.360641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.360667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.360681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.360693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.360723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.370616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.370745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.370774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.370789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.370801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.370830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.380554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.380633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.380658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.380671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.380682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.380715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.390672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.390764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.390789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.390802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.390814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.390844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.400637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.400745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.802 [2024-11-17 11:30:35.400776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.802 [2024-11-17 11:30:35.400790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.802 [2024-11-17 11:30:35.400802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.802 [2024-11-17 11:30:35.400832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.802 qpair failed and we were unable to recover it.
00:36:10.802 [2024-11-17 11:30:35.410633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.802 [2024-11-17 11:30:35.410721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.803 [2024-11-17 11:30:35.410745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.803 [2024-11-17 11:30:35.410758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.803 [2024-11-17 11:30:35.410771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.803 [2024-11-17 11:30:35.410814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.803 qpair failed and we were unable to recover it.
00:36:10.803 [2024-11-17 11:30:35.420653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.803 [2024-11-17 11:30:35.420751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.803 [2024-11-17 11:30:35.420781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.803 [2024-11-17 11:30:35.420796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.803 [2024-11-17 11:30:35.420808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.803 [2024-11-17 11:30:35.420839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.803 qpair failed and we were unable to recover it.
00:36:10.803 [2024-11-17 11:30:35.430708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.803 [2024-11-17 11:30:35.430799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.803 [2024-11-17 11:30:35.430823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.803 [2024-11-17 11:30:35.430836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.803 [2024-11-17 11:30:35.430848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.803 [2024-11-17 11:30:35.430878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.803 qpair failed and we were unable to recover it.
00:36:10.803 [2024-11-17 11:30:35.440744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.803 [2024-11-17 11:30:35.440828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.803 [2024-11-17 11:30:35.440855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.803 [2024-11-17 11:30:35.440874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.803 [2024-11-17 11:30:35.440887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.803 [2024-11-17 11:30:35.440918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.803 qpair failed and we were unable to recover it.
00:36:10.803 [2024-11-17 11:30:35.450845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:10.803 [2024-11-17 11:30:35.450928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:10.803 [2024-11-17 11:30:35.450954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:10.803 [2024-11-17 11:30:35.450968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:10.803 [2024-11-17 11:30:35.450980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:10.803 [2024-11-17 11:30:35.451010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:10.803 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.460760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.460895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.460922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.460936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.460947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.460978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.470836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.470950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.470976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.470990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.471002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.471034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.480819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.480901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.480926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.480939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.480951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.480980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.490940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.491027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.491053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.491067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.491079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.491109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.500938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.501034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.501057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.501071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.501083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.501113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.510930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.511015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.511039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.511052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.511064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.511094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.520923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.108 [2024-11-17 11:30:35.521005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.108 [2024-11-17 11:30:35.521029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.108 [2024-11-17 11:30:35.521043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.108 [2024-11-17 11:30:35.521055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.108 [2024-11-17 11:30:35.521097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.108 qpair failed and we were unable to recover it.
00:36:11.108 [2024-11-17 11:30:35.531010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.108 [2024-11-17 11:30:35.531113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.108 [2024-11-17 11:30:35.531139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.108 [2024-11-17 11:30:35.531153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.108 [2024-11-17 11:30:35.531165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.108 [2024-11-17 11:30:35.531195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.108 qpair failed and we were unable to recover it. 
00:36:11.108 [2024-11-17 11:30:35.541014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.108 [2024-11-17 11:30:35.541098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.541121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.541134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.541146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.541175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.551082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.551179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.551205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.551219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.551231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.551260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.561146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.561232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.561257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.561270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.561281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.561312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.571089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.571170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.571195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.571215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.571227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.571257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.581131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.581246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.581271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.581285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.581297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.581326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.591200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.591291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.591319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.591333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.591345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.591374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.601219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.601337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.601362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.601375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.601387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.601416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.611186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.611296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.611321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.611334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.611346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.611381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.621242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.621331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.621356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.621369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.621381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.621411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.631299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.631392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.631416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.631429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.631440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.631470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.641283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.641366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.641390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.641403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.641415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.641444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.651303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.651386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.651410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.651423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.651435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.651463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.661375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.661494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.109 [2024-11-17 11:30:35.661520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.109 [2024-11-17 11:30:35.661543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.109 [2024-11-17 11:30:35.661555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.109 [2024-11-17 11:30:35.661585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.109 qpair failed and we were unable to recover it. 
00:36:11.109 [2024-11-17 11:30:35.671404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.109 [2024-11-17 11:30:35.671511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.671545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.671560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.671571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.671614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-11-17 11:30:35.681383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.110 [2024-11-17 11:30:35.681477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.681502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.681515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.681545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.681590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-11-17 11:30:35.691496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.110 [2024-11-17 11:30:35.691596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.691624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.691638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.691650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.691682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-11-17 11:30:35.701446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.110 [2024-11-17 11:30:35.701546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.701577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.701591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.701602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.701634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-11-17 11:30:35.711498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.110 [2024-11-17 11:30:35.711598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.711625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.711638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.711650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.711680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.110 [2024-11-17 11:30:35.721589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.110 [2024-11-17 11:30:35.721680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.110 [2024-11-17 11:30:35.721710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.110 [2024-11-17 11:30:35.721730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.110 [2024-11-17 11:30:35.721742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.110 [2024-11-17 11:30:35.721774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.110 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.731522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.731613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.731639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.731653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.731665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.731707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.741562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.741649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.741678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.741693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.741704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.741740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.751591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.751681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.751705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.751718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.751729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.751759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.761646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.761730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.761754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.761767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.761779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.761809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.771660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.771784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.771810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.771836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.771848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.771877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.781667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.781763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.781790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.781803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.781815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.781844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.791736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.791833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.791862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.791877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.791890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.791920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.801737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.801835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.801862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.801875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.801887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.801916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.811873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.416 [2024-11-17 11:30:35.811977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.416 [2024-11-17 11:30:35.812003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.416 [2024-11-17 11:30:35.812017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.416 [2024-11-17 11:30:35.812029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.416 [2024-11-17 11:30:35.812058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.416 qpair failed and we were unable to recover it. 
00:36:11.416 [2024-11-17 11:30:35.821797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.416 [2024-11-17 11:30:35.821883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.416 [2024-11-17 11:30:35.821908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.416 [2024-11-17 11:30:35.821922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.416 [2024-11-17 11:30:35.821934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.416 [2024-11-17 11:30:35.821963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.416 qpair failed and we were unable to recover it.
00:36:11.416 [2024-11-17 11:30:35.831861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.416 [2024-11-17 11:30:35.831950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.416 [2024-11-17 11:30:35.831981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.416 [2024-11-17 11:30:35.831995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.416 [2024-11-17 11:30:35.832007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.416 [2024-11-17 11:30:35.832037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.416 qpair failed and we were unable to recover it.
00:36:11.416 [2024-11-17 11:30:35.841967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.416 [2024-11-17 11:30:35.842052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.842078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.842092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.842104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.842133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.851898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.851977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.852002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.852015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.852026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.852056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.862036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.862120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.862146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.862160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.862171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.862201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.871942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.872029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.872053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.872066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.872084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.872114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.881986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.882070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.882093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.882107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.882118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.882148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.892029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.892109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.892136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.892150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.892162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.892192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.902019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.902103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.902127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.902140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.902152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.902182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.912106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.912194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.912218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.912232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.912244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.912286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.922083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.922171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.922197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.922211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.922223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.922253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.932171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.932267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.932291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.932304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.932316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.932345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.942155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.942240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.942267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.942281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.942292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.942323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.952259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.952399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.952426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.952440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.952452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.417 [2024-11-17 11:30:35.952481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.417 qpair failed and we were unable to recover it.
00:36:11.417 [2024-11-17 11:30:35.962241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.417 [2024-11-17 11:30:35.962326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.417 [2024-11-17 11:30:35.962360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.417 [2024-11-17 11:30:35.962376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.417 [2024-11-17 11:30:35.962388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:35.962419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:35.972227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:35.972311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:35.972336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:35.972349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:35.972361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:35.972392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:35.982308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:35.982405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:35.982431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:35.982446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:35.982458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:35.982488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:35.992294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:35.992379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:35.992403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:35.992416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:35.992427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:35.992457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:36.002310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:36.002428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:36.002454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:36.002474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:36.002486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:36.002515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:36.012364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:36.012447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:36.012471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:36.012485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:36.012497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:36.012537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:36.022403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:36.022482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:36.022506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:36.022540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:36.022553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:36.022584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:36.032420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:36.032505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:36.032544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:36.032559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:36.032571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:36.032601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.418 [2024-11-17 11:30:36.042494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.418 [2024-11-17 11:30:36.042613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.418 [2024-11-17 11:30:36.042640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.418 [2024-11-17 11:30:36.042654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.418 [2024-11-17 11:30:36.042666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.418 [2024-11-17 11:30:36.042707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.418 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.052473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.052574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.052600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.052614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.052626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.052656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.062551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.062636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.062660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.062673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.062685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.062715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.072568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.072659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.072683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.072696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.072708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.072738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.082567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.082694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.082720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.082734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.082746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.082775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.092590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.092676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.092700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.092713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.092725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.092755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.102648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.102754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.102780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.102794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.102805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.102835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.112687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.112777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.112800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.112814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.112825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.112856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.122764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.701 [2024-11-17 11:30:36.122897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.701 [2024-11-17 11:30:36.122923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.701 [2024-11-17 11:30:36.122936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.701 [2024-11-17 11:30:36.122948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.701 [2024-11-17 11:30:36.122978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.701 qpair failed and we were unable to recover it.
00:36:11.701 [2024-11-17 11:30:36.132696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.701 [2024-11-17 11:30:36.132776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.701 [2024-11-17 11:30:36.132800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.701 [2024-11-17 11:30:36.132819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.701 [2024-11-17 11:30:36.132831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.701 [2024-11-17 11:30:36.132861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.701 [2024-11-17 11:30:36.142733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.701 [2024-11-17 11:30:36.142816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.701 [2024-11-17 11:30:36.142853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.701 [2024-11-17 11:30:36.142868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.701 [2024-11-17 11:30:36.142879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.701 [2024-11-17 11:30:36.142922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.701 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.152871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.152964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.152992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.153006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.153018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.153047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.162802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.162891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.162920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.162933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.162945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.162975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.172871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.172956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.172981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.172994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.173005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.173041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.182837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.182965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.182992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.183014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.183036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.183079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.192901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.193010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.193038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.193052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.193064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.193095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.202967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.203075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.203102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.203116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.203128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.203158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.212988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.213078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.213103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.213116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.213128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.213158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.222955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.223037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.223062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.223075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.223087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.223116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.233047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.233144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.233169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.233183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.233195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.233224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.243053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.243140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.243164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.243177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.243188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.243218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.253059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.253161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.253187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.253200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.253212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.253241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.263064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.263150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.263181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.263196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.263208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.263238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.702 [2024-11-17 11:30:36.273111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.702 [2024-11-17 11:30:36.273200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.702 [2024-11-17 11:30:36.273225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.702 [2024-11-17 11:30:36.273239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.702 [2024-11-17 11:30:36.273251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.702 [2024-11-17 11:30:36.273280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.702 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.283153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.283280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.283307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.283321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.283333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.283363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.293222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.293325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.293350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.293364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.293376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.293405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.303179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.303262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.303287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.303301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.303318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.303349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.313262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.313369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.313395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.313409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.313420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.313450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.323311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.323403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.323427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.323440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.323452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.323481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.333274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.333360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.333385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.333398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.333410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.333441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.343321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.343406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.343432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.343446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.343457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.343486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.703 [2024-11-17 11:30:36.353340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.703 [2024-11-17 11:30:36.353431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.703 [2024-11-17 11:30:36.353457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.703 [2024-11-17 11:30:36.353472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.703 [2024-11-17 11:30:36.353483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.703 [2024-11-17 11:30:36.353514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.703 qpair failed and we were unable to recover it. 
00:36:11.962 [2024-11-17 11:30:36.363479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.962 [2024-11-17 11:30:36.363571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.962 [2024-11-17 11:30:36.363600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.962 [2024-11-17 11:30:36.363614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.962 [2024-11-17 11:30:36.363626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.962 [2024-11-17 11:30:36.363655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.962 qpair failed and we were unable to recover it. 
00:36:11.962 [2024-11-17 11:30:36.373406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.962 [2024-11-17 11:30:36.373494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.962 [2024-11-17 11:30:36.373522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.962 [2024-11-17 11:30:36.373545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.962 [2024-11-17 11:30:36.373558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.962 [2024-11-17 11:30:36.373588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.962 qpair failed and we were unable to recover it. 
00:36:11.962 [2024-11-17 11:30:36.383503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.962 [2024-11-17 11:30:36.383646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.962 [2024-11-17 11:30:36.383676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.962 [2024-11-17 11:30:36.383690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.962 [2024-11-17 11:30:36.383702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.963 [2024-11-17 11:30:36.383731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.963 qpair failed and we were unable to recover it. 
00:36:11.963 [2024-11-17 11:30:36.393553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.963 [2024-11-17 11:30:36.393641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.963 [2024-11-17 11:30:36.393672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.963 [2024-11-17 11:30:36.393686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.963 [2024-11-17 11:30:36.393698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.963 [2024-11-17 11:30:36.393728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.963 qpair failed and we were unable to recover it. 
00:36:11.963 [2024-11-17 11:30:36.403562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.963 [2024-11-17 11:30:36.403658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.963 [2024-11-17 11:30:36.403684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.963 [2024-11-17 11:30:36.403697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.963 [2024-11-17 11:30:36.403709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.963 [2024-11-17 11:30:36.403738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.963 qpair failed and we were unable to recover it. 
00:36:11.963 [2024-11-17 11:30:36.413534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:11.963 [2024-11-17 11:30:36.413615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:11.963 [2024-11-17 11:30:36.413639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:11.963 [2024-11-17 11:30:36.413652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:11.963 [2024-11-17 11:30:36.413664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:11.963 [2024-11-17 11:30:36.413694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:11.963 qpair failed and we were unable to recover it. 
00:36:11.963 [2024-11-17 11:30:36.423550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.423644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.423669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.423682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.423694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.423724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.433611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.433752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.433778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.433793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.433811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.433842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.443602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.443691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.443718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.443731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.443744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.443775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.453732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.453812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.453836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.453849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.453861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.453891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.463685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.463816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.463842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.463855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.463867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.463896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.473694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.473817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.473843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.473857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.473868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.473898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.483729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.483860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.483886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.483900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.483911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.483940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.493745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.493827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.493852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.493866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.493878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.493907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.503739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.503820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.503845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.503859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.963 [2024-11-17 11:30:36.503871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.963 [2024-11-17 11:30:36.503901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.963 qpair failed and we were unable to recover it.
00:36:11.963 [2024-11-17 11:30:36.513846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.963 [2024-11-17 11:30:36.513935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.963 [2024-11-17 11:30:36.513961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.963 [2024-11-17 11:30:36.513975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.513987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.514017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.523838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.523927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.523963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.523979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.523991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.524021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.533845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.533930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.533955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.533968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.533980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.534010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.543884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.543972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.543996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.544009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.544021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.544051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.553988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.554084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.554109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.554123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.554135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.554165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.564000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.564098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.564123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.564142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.564155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.564185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.573989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.574073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.574096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.574109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.574121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.574151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.583991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.584082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.584107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.584120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.584132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.584161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.594089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.594173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.594199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.594213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.594225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.594255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.604084] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.604173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.604197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.604210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.604222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.604251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:11.964 [2024-11-17 11:30:36.614083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:11.964 [2024-11-17 11:30:36.614171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:11.964 [2024-11-17 11:30:36.614196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:11.964 [2024-11-17 11:30:36.614210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:11.964 [2024-11-17 11:30:36.614221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:11.964 [2024-11-17 11:30:36.614251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:11.964 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.624163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.624251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.624280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.624294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.224 [2024-11-17 11:30:36.624305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.224 [2024-11-17 11:30:36.624335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.224 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.634194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.634285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.634311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.634325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.224 [2024-11-17 11:30:36.634337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.224 [2024-11-17 11:30:36.634366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.224 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.644197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.644282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.644306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.644320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.224 [2024-11-17 11:30:36.644332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.224 [2024-11-17 11:30:36.644361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.224 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.654233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.654323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.654347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.654360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.224 [2024-11-17 11:30:36.654371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.224 [2024-11-17 11:30:36.654401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.224 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.664322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.664406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.664431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.664444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.224 [2024-11-17 11:30:36.664456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.224 [2024-11-17 11:30:36.664485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.224 qpair failed and we were unable to recover it.
00:36:12.224 [2024-11-17 11:30:36.674282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.224 [2024-11-17 11:30:36.674370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.224 [2024-11-17 11:30:36.674396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.224 [2024-11-17 11:30:36.674409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.674421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.674450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.684319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.684422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.684446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.684465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.684485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.684536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.694358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.694445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.694472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.694494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.694507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.694545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.704476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.704571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.704598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.704612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.704624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.704654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.714498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.714595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.714621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.714635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.714647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.714677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.724409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.724541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.724567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.724580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.724592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.724622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.734531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.734669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.734694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.734708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.734720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.734755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.744481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.744572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.744597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.744611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.744622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.744654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.754503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.754631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.754657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.754671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.754682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.754712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.764549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.225 [2024-11-17 11:30:36.764666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.225 [2024-11-17 11:30:36.764692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.225 [2024-11-17 11:30:36.764707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.225 [2024-11-17 11:30:36.764718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.225 [2024-11-17 11:30:36.764748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.225 qpair failed and we were unable to recover it.
00:36:12.225 [2024-11-17 11:30:36.774637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.225 [2024-11-17 11:30:36.774748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.225 [2024-11-17 11:30:36.774774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.225 [2024-11-17 11:30:36.774787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.225 [2024-11-17 11:30:36.774799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.225 [2024-11-17 11:30:36.774829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.225 qpair failed and we were unable to recover it. 
00:36:12.225 [2024-11-17 11:30:36.784566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.225 [2024-11-17 11:30:36.784643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.225 [2024-11-17 11:30:36.784668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.225 [2024-11-17 11:30:36.784681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.225 [2024-11-17 11:30:36.784693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.225 [2024-11-17 11:30:36.784723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.225 qpair failed and we were unable to recover it. 
00:36:12.225 [2024-11-17 11:30:36.794611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.225 [2024-11-17 11:30:36.794698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.225 [2024-11-17 11:30:36.794723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.225 [2024-11-17 11:30:36.794737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.225 [2024-11-17 11:30:36.794749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.225 [2024-11-17 11:30:36.794778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.225 qpair failed and we were unable to recover it. 
00:36:12.225 [2024-11-17 11:30:36.804673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.225 [2024-11-17 11:30:36.804802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.225 [2024-11-17 11:30:36.804828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.225 [2024-11-17 11:30:36.804841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.804853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.804883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.814659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.814741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.814765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.814778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.814790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.814820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.824725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.824810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.824844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.824859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.824870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.824901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.834818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.834957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.834982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.834995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.835006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.835036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.844746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.844835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.844860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.844874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.844886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.844915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.854793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.854907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.854932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.854945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.854957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.854986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.864845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.864928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.864951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.864965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.864982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.865012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.226 [2024-11-17 11:30:36.874935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.226 [2024-11-17 11:30:36.875029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.226 [2024-11-17 11:30:36.875059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.226 [2024-11-17 11:30:36.875073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.226 [2024-11-17 11:30:36.875085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.226 [2024-11-17 11:30:36.875116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.226 qpair failed and we were unable to recover it. 
00:36:12.485 [2024-11-17 11:30:36.884859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.485 [2024-11-17 11:30:36.884942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.485 [2024-11-17 11:30:36.884966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.485 [2024-11-17 11:30:36.884979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.485 [2024-11-17 11:30:36.884991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.485 [2024-11-17 11:30:36.885021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.485 qpair failed and we were unable to recover it. 
00:36:12.485 [2024-11-17 11:30:36.894886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.485 [2024-11-17 11:30:36.894976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.485 [2024-11-17 11:30:36.895006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.485 [2024-11-17 11:30:36.895022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.485 [2024-11-17 11:30:36.895034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.485 [2024-11-17 11:30:36.895065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.485 qpair failed and we were unable to recover it. 
00:36:12.485 [2024-11-17 11:30:36.904914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.485 [2024-11-17 11:30:36.905005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.485 [2024-11-17 11:30:36.905031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.485 [2024-11-17 11:30:36.905045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.485 [2024-11-17 11:30:36.905057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.485 [2024-11-17 11:30:36.905087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.485 qpair failed and we were unable to recover it. 
00:36:12.485 [2024-11-17 11:30:36.914977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.485 [2024-11-17 11:30:36.915065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.485 [2024-11-17 11:30:36.915091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.485 [2024-11-17 11:30:36.915104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.485 [2024-11-17 11:30:36.915116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.485 [2024-11-17 11:30:36.915146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.485 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.925037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.925155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.925180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.925193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.925205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.925234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.935032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.935119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.935143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.935157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.935168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.935198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.945065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.945154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.945185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.945201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.945213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.945244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.955079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.955197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.955228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.955243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.955255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.955285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.965187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.965307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.965333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.965347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.965359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.965389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.975245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.975380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.975404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.975417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.975428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.975458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.985261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.985347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.985383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.985396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.985408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.985437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:36.995287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:36.995374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:36.995400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:36.995414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:36.995432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:36.995463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:37.005348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:37.005481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:37.005506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:37.005520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:37.005540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:37.005570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:37.015246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:37.015368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:37.015394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:37.015407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:37.015420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:37.015450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:37.025286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:37.025412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:37.025438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:37.025452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:37.025464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:37.025494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:37.035397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.486 [2024-11-17 11:30:37.035483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.486 [2024-11-17 11:30:37.035509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.486 [2024-11-17 11:30:37.035522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.486 [2024-11-17 11:30:37.035544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.486 [2024-11-17 11:30:37.035575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.486 qpair failed and we were unable to recover it. 
00:36:12.486 [2024-11-17 11:30:37.045327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.486 [2024-11-17 11:30:37.045417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.486 [2024-11-17 11:30:37.045446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.486 [2024-11-17 11:30:37.045460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.486 [2024-11-17 11:30:37.045472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.486 [2024-11-17 11:30:37.045502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.486 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.055359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.055441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.055466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.055479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.055491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.055521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.065395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.065476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.065501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.065515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.065534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.065565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.075426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.075515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.075549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.075564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.075576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.075606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.085542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.085639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.085676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.085691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.085702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.085732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.095516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.095620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.095645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.095658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.095670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.095701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.105534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.105635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.105660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.105674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.105685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.105715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.115558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.115650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.115680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.115696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.115709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.115740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.125573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.125665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.125691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.125711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.125724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.125755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.487 [2024-11-17 11:30:37.135588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.487 [2024-11-17 11:30:37.135674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.487 [2024-11-17 11:30:37.135698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.487 [2024-11-17 11:30:37.135712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.487 [2024-11-17 11:30:37.135725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.487 [2024-11-17 11:30:37.135756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.487 qpair failed and we were unable to recover it.
00:36:12.746 [2024-11-17 11:30:37.145636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.746 [2024-11-17 11:30:37.145772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.746 [2024-11-17 11:30:37.145801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.746 [2024-11-17 11:30:37.145815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.746 [2024-11-17 11:30:37.145828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.746 [2024-11-17 11:30:37.145857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.746 qpair failed and we were unable to recover it.
00:36:12.746 [2024-11-17 11:30:37.155683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.746 [2024-11-17 11:30:37.155820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.746 [2024-11-17 11:30:37.155846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.746 [2024-11-17 11:30:37.155860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.746 [2024-11-17 11:30:37.155871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.746 [2024-11-17 11:30:37.155900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.746 qpair failed and we were unable to recover it.
00:36:12.746 [2024-11-17 11:30:37.165718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.165808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.165833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.165847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.165859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.165889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.175723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.175841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.175866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.175880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.175892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.175922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.185800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.185906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.185930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.185944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.185962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.186006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.195794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.195885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.195913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.195927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.195939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.195970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.205842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.205957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.205983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.205997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.206009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.206039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.215837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.215968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.215994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.216007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.216019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.216049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.225858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.225944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.225969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.225985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.225997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.226027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.235934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.236051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.236077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.236091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.236102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.236132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.245915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.246001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.246025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.246038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.246051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.246080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.256077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.256215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.256241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.256261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.256274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.256305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.265965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.266055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.266079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.266092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.266103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.266133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.276104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.276197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.276223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.276237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.276249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.276278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.286067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.747 [2024-11-17 11:30:37.286184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.747 [2024-11-17 11:30:37.286210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.747 [2024-11-17 11:30:37.286224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.747 [2024-11-17 11:30:37.286235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.747 [2024-11-17 11:30:37.286277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.747 qpair failed and we were unable to recover it.
00:36:12.747 [2024-11-17 11:30:37.296034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.296135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.296161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.296175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.296186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.296221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.306092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.306176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.306200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.306213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.306224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.306254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.316124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.316211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.316236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.316250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.316262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.316292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.326253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.326341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.326365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.326378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.326390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.326419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.336188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.336275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.336305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.336319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.336331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.336363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.346216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.346296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.346321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.346334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.346346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.346376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.356331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.356426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.356452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.356465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.356477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.356506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.366245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.366332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.366357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.366370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.366382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.366411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.376289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.376427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.376453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.376467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.376479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.376508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.386439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:12.748 [2024-11-17 11:30:37.386529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:12.748 [2024-11-17 11:30:37.386562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:12.748 [2024-11-17 11:30:37.386577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:12.748 [2024-11-17 11:30:37.386589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90
00:36:12.748 [2024-11-17 11:30:37.386618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:12.748 qpair failed and we were unable to recover it.
00:36:12.748 [2024-11-17 11:30:37.396437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:12.748 [2024-11-17 11:30:37.396539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:12.748 [2024-11-17 11:30:37.396565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:12.748 [2024-11-17 11:30:37.396579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:12.748 [2024-11-17 11:30:37.396590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:12.748 [2024-11-17 11:30:37.396620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:12.748 qpair failed and we were unable to recover it. 
00:36:13.007 [2024-11-17 11:30:37.406375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.007 [2024-11-17 11:30:37.406461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.007 [2024-11-17 11:30:37.406486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.007 [2024-11-17 11:30:37.406499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.007 [2024-11-17 11:30:37.406511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.007 [2024-11-17 11:30:37.406547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.007 qpair failed and we were unable to recover it. 
00:36:13.007 [2024-11-17 11:30:37.416398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.007 [2024-11-17 11:30:37.416508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.007 [2024-11-17 11:30:37.416542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.007 [2024-11-17 11:30:37.416557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.007 [2024-11-17 11:30:37.416569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.007 [2024-11-17 11:30:37.416599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.007 qpair failed and we were unable to recover it. 
00:36:13.007 [2024-11-17 11:30:37.426435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.007 [2024-11-17 11:30:37.426517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.426549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.426563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.426580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.426611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.436504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.436627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.436654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.436667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.436679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.436709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.446493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.446593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.446621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.446635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.446647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.446678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.456515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.456613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.456642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.456656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.456667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.456697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.466530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.466613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.466638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.466652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.466664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.466693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.476585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.476678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.476704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.476718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.476729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.476759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.486614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.486736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.486761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.486775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.486787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.486817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.496641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.496766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.496791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.496804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.496816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.496846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.506667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.506748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.506772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.506785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.506797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.506826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.516823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.516949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.516979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.516994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.517005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.517035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.526731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.526817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.526842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.526855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.526867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.526908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.536720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.536807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.536836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.536850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.536861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.536891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.546767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.546847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.546873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.008 [2024-11-17 11:30:37.546887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.008 [2024-11-17 11:30:37.546898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.008 [2024-11-17 11:30:37.546941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.008 qpair failed and we were unable to recover it. 
00:36:13.008 [2024-11-17 11:30:37.556826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.008 [2024-11-17 11:30:37.556911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.008 [2024-11-17 11:30:37.556936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.556949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.556967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.556997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.566852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.566936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.566960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.566974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.566986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.567015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.576848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.576924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.576949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.576962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.576973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.577015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.586852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.586933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.586958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.586971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.586983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.587012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.596925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.597011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.597037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.597051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.597062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.597092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.606952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.607038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.607062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.607075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.607087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.607116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.616941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.617034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.617058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.617071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.617083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.617113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.627001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.627084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.627108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.627121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.627133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.627162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.637037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.637124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.637149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.637163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.637174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.637204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.647047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.647129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.647158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.647172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.647184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.647214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.009 [2024-11-17 11:30:37.657067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.009 [2024-11-17 11:30:37.657202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.009 [2024-11-17 11:30:37.657227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.009 [2024-11-17 11:30:37.657240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.009 [2024-11-17 11:30:37.657251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.009 [2024-11-17 11:30:37.657281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.009 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.667146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.667244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.667270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.667283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.667295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.667325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.677197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.677343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.677368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.677382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.677394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.677423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.687173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.687263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.687290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.687314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.687336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.687380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.697201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.697287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.697315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.697328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.697340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.697371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.707247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.707339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.707368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.707383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.707395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.707425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.717242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.717334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.717363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.717377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.717389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.717419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.727284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.727371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.727396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.727410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.727422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.727457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.737302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.737388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.737412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.737425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.737437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.737467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.747311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.747395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.747420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.747433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.747445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.747474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.757354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.757471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.757497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.757510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.757522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.757562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.767395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.767481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.767505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.767518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.767539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.767570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.269 [2024-11-17 11:30:37.777394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.269 [2024-11-17 11:30:37.777481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.269 [2024-11-17 11:30:37.777505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.269 [2024-11-17 11:30:37.777519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.269 [2024-11-17 11:30:37.777540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.269 [2024-11-17 11:30:37.777570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.269 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.787418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.787520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.787556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.787570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.787582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.787612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.797488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.797599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.797625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.797639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.797651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.797681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.807476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.807571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.807596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.807610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.807622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.807651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.817547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.817659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.817685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.817704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.817717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.817749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.827581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.827676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.827702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.827715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.827727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.827757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.837589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.837684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.837713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.837729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.837741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.837773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.847685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.847773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.847798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.847811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.847823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.847853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.857638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.857723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.857748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.857761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.857772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.857808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.867684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.867804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.867829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.867842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.867854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.867884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.877701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.877789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.877814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.877827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.877839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.877868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.887749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.887869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.887895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.887909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.887921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.887950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.897729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.897816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.897840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.897853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.897865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.897895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.907746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.270 [2024-11-17 11:30:37.907830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.270 [2024-11-17 11:30:37.907856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.270 [2024-11-17 11:30:37.907870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.270 [2024-11-17 11:30:37.907882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.270 [2024-11-17 11:30:37.907911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.270 qpair failed and we were unable to recover it. 
00:36:13.270 [2024-11-17 11:30:37.917799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.271 [2024-11-17 11:30:37.917894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.271 [2024-11-17 11:30:37.917919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.271 [2024-11-17 11:30:37.917933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.271 [2024-11-17 11:30:37.917945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.271 [2024-11-17 11:30:37.917975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.271 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.927831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.927915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.927941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.927954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.927969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.927999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.937874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.937958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.937982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.937994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.938006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.938036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.947898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.947998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.948031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.948046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.948058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.948089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.957928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.958019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.958045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.958058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.958070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.958100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.967981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.968069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.968094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.968108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.968119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.968162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.978007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.978092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.978117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.978130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.978142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.978171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.988062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.988145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.988169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.988182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.988199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.988229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:37.998016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:37.998102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:37.998128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:37.998142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:37.998154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:37.998196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:38.008058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:38.008143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:38.008167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:38.008181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:38.008192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:38.008234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:38.018124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:38.018213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:38.018238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:38.018251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:38.018263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:38.018293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:38.028089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:38.028170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:38.028194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.530 [2024-11-17 11:30:38.028208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.530 [2024-11-17 11:30:38.028220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.530 [2024-11-17 11:30:38.028249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.530 qpair failed and we were unable to recover it. 
00:36:13.530 [2024-11-17 11:30:38.038141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.530 [2024-11-17 11:30:38.038230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.530 [2024-11-17 11:30:38.038255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.038268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.038281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.038311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.048183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.048293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.048319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.048333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.048344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.048374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.058189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.058272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.058297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.058311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.058323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.058357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.068242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.068330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.068356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.068369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.068381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.068412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.078256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.078387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.078420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.078434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.078446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.078476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.088280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.088362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.088388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.088401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.088413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.088442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.098291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.098388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.098414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.098428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.098439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.098469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.108315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.108398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.108422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.108435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.108447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.108476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.118360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.118463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.118489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.118503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.118520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.118560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.128435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.128522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.128554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.128567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.128579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.128608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.138472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.138565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.138589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.138602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.138614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.138644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.531 qpair failed and we were unable to recover it. 
00:36:13.531 [2024-11-17 11:30:38.148481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.531 [2024-11-17 11:30:38.148569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.531 [2024-11-17 11:30:38.148595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.531 [2024-11-17 11:30:38.148608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.531 [2024-11-17 11:30:38.148620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.531 [2024-11-17 11:30:38.148650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.532 qpair failed and we were unable to recover it. 
00:36:13.532 [2024-11-17 11:30:38.158496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.532 [2024-11-17 11:30:38.158594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.532 [2024-11-17 11:30:38.158623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.532 [2024-11-17 11:30:38.158637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.532 [2024-11-17 11:30:38.158649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39b8000b90 00:36:13.532 [2024-11-17 11:30:38.158679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:13.532 qpair failed and we were unable to recover it. 
00:36:13.532 [2024-11-17 11:30:38.168505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.532 [2024-11-17 11:30:38.168609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.532 [2024-11-17 11:30:38.168642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.532 [2024-11-17 11:30:38.168657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.532 [2024-11-17 11:30:38.168669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:13.532 [2024-11-17 11:30:38.168701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.532 qpair failed and we were unable to recover it. 
00:36:13.532 [2024-11-17 11:30:38.178560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.532 [2024-11-17 11:30:38.178646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.532 [2024-11-17 11:30:38.178673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.532 [2024-11-17 11:30:38.178688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.532 [2024-11-17 11:30:38.178699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39c4000b90 00:36:13.532 [2024-11-17 11:30:38.178730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:13.532 qpair failed and we were unable to recover it. 
00:36:13.790 [2024-11-17 11:30:38.188649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.790 [2024-11-17 11:30:38.188771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.790 [2024-11-17 11:30:38.188812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.790 [2024-11-17 11:30:38.188828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.790 [2024-11-17 11:30:38.188841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39bc000b90 00:36:13.790 [2024-11-17 11:30:38.188873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:13.790 qpair failed and we were unable to recover it. 
00:36:13.790 [2024-11-17 11:30:38.198666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:13.790 [2024-11-17 11:30:38.198766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:13.790 [2024-11-17 11:30:38.198796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:13.790 [2024-11-17 11:30:38.198815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:13.790 [2024-11-17 11:30:38.198827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f39bc000b90 00:36:13.790 [2024-11-17 11:30:38.198858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:13.790 qpair failed and we were unable to recover it. 00:36:13.790 [2024-11-17 11:30:38.198963] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:13.790 A controller has encountered a failure and is being reset. 00:36:13.790 Controller properly reset. 00:36:13.790 Initializing NVMe Controllers 00:36:13.790 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:13.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:13.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:13.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:13.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:13.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:13.790 Initialization complete. Launching workers. 
00:36:13.790 Starting thread on core 1 00:36:13.790 Starting thread on core 2 00:36:13.790 Starting thread on core 3 00:36:13.790 Starting thread on core 0 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:13.790 00:36:13.790 real 0m10.844s 00:36:13.790 user 0m19.489s 00:36:13.790 sys 0m5.146s 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.790 ************************************ 00:36:13.790 END TEST nvmf_target_disconnect_tc2 00:36:13.790 ************************************ 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.790 rmmod nvme_tcp 00:36:13.790 rmmod nvme_fabrics 00:36:13.790 rmmod nvme_keyring 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 403696 ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 403696 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 403696 ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 403696 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403696 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403696' 00:36:13.790 killing process with pid 403696 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 403696 00:36:13.790 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 403696 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:14.050 11:30:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.585 11:30:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:16.585 00:36:16.585 real 0m15.837s 00:36:16.585 user 0m46.249s 00:36:16.585 sys 0m7.205s 00:36:16.585 11:30:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.585 11:30:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:16.585 ************************************ 00:36:16.585 END TEST nvmf_target_disconnect 00:36:16.585 ************************************ 00:36:16.585 11:30:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:16.585 00:36:16.585 real 6m43.680s 00:36:16.585 user 17m17.404s 00:36:16.585 sys 1m26.430s 00:36:16.585 11:30:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.585 11:30:40 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:16.585 ************************************ 00:36:16.585 END TEST nvmf_host 00:36:16.585 ************************************ 00:36:16.585 11:30:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:16.585 11:30:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:16.585 11:30:40 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:16.585 11:30:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:16.585 11:30:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.585 11:30:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.585 ************************************ 00:36:16.585 START TEST nvmf_target_core_interrupt_mode 00:36:16.585 ************************************ 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:16.585 * Looking for test storage... 
00:36:16.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:16.585 11:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.585 --rc 
genhtml_branch_coverage=1 00:36:16.585 --rc genhtml_function_coverage=1 00:36:16.585 --rc genhtml_legend=1 00:36:16.585 --rc geninfo_all_blocks=1 00:36:16.585 --rc geninfo_unexecuted_blocks=1 00:36:16.585 00:36:16.585 ' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.585 --rc genhtml_branch_coverage=1 00:36:16.585 --rc genhtml_function_coverage=1 00:36:16.585 --rc genhtml_legend=1 00:36:16.585 --rc geninfo_all_blocks=1 00:36:16.585 --rc geninfo_unexecuted_blocks=1 00:36:16.585 00:36:16.585 ' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.585 --rc genhtml_branch_coverage=1 00:36:16.585 --rc genhtml_function_coverage=1 00:36:16.585 --rc genhtml_legend=1 00:36:16.585 --rc geninfo_all_blocks=1 00:36:16.585 --rc geninfo_unexecuted_blocks=1 00:36:16.585 00:36:16.585 ' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:16.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.585 --rc genhtml_branch_coverage=1 00:36:16.585 --rc genhtml_function_coverage=1 00:36:16.585 --rc genhtml_legend=1 00:36:16.585 --rc geninfo_all_blocks=1 00:36:16.585 --rc geninfo_unexecuted_blocks=1 00:36:16.585 00:36:16.585 ' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.585 
11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.585 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 11:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:16.586 
11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:16.586 ************************************ 00:36:16.586 START TEST nvmf_abort 00:36:16.586 ************************************ 00:36:16.586 11:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:16.586 * Looking for test storage... 
00:36:16.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:16.586 11:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.586 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.587 11:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.587 11:30:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.587 11:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:19.120 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:19.121 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:19.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:19.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:19.121 
11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:19.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:19.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:19.121 11:30:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:19.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:19.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:36:19.121 00:36:19.121 --- 10.0.0.2 ping statistics --- 00:36:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.121 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:19.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:19.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:36:19.121 00:36:19.121 --- 10.0.0.1 ping statistics --- 00:36:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:19.121 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=406506 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 406506 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 406506 ']' 00:36:19.121 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.122 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.122 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.122 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.122 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.122 [2024-11-17 11:30:43.572555] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:19.122 [2024-11-17 11:30:43.573680] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:36:19.122 [2024-11-17 11:30:43.573737] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.122 [2024-11-17 11:30:43.641394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:19.122 [2024-11-17 11:30:43.683880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.122 [2024-11-17 11:30:43.683941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:19.122 [2024-11-17 11:30:43.683965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:19.122 [2024-11-17 11:30:43.683975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:19.122 [2024-11-17 11:30:43.683986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:19.122 [2024-11-17 11:30:43.685358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:19.122 [2024-11-17 11:30:43.685424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:19.122 [2024-11-17 11:30:43.685420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:19.122 [2024-11-17 11:30:43.764752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:19.122 [2024-11-17 11:30:43.764927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:19.122 [2024-11-17 11:30:43.764936] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:19.122 [2024-11-17 11:30:43.765181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.379 [2024-11-17 11:30:43.818617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:19.379 Malloc0 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.379 Delay0 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.379 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 [2024-11-17 11:30:43.890366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.380 11:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:19.638 [2024-11-17 11:30:44.036634] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:21.537 Initializing NVMe Controllers 00:36:21.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:21.537 controller IO queue size 128 less than required 00:36:21.537 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:21.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:21.537 Initialization complete. Launching workers. 
00:36:21.537 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29386 00:36:21.537 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29443, failed to submit 66 00:36:21.537 success 29386, unsuccessful 57, failed 0 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:21.537 rmmod nvme_tcp 00:36:21.537 rmmod nvme_fabrics 00:36:21.537 rmmod nvme_keyring 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:21.537 11:30:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 406506 ']' 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 406506 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 406506 ']' 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 406506 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.537 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406506 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406506' 00:36:21.796 killing process with pid 406506 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 406506 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 406506 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:21.796 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:21.797 11:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.338 00:36:24.338 real 0m7.500s 00:36:24.338 user 0m9.509s 00:36:24.338 sys 0m2.950s 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:24.338 ************************************ 00:36:24.338 END TEST nvmf_abort 00:36:24.338 ************************************ 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:24.338 ************************************ 00:36:24.338 START TEST nvmf_ns_hotplug_stress 00:36:24.338 ************************************ 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:24.338 * Looking for test storage... 00:36:24.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.338 --rc genhtml_branch_coverage=1 00:36:24.338 --rc genhtml_function_coverage=1 00:36:24.338 --rc genhtml_legend=1 00:36:24.338 --rc geninfo_all_blocks=1 00:36:24.338 --rc geninfo_unexecuted_blocks=1 00:36:24.338 00:36:24.338 ' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.338 --rc genhtml_branch_coverage=1 00:36:24.338 --rc genhtml_function_coverage=1 00:36:24.338 --rc genhtml_legend=1 00:36:24.338 --rc geninfo_all_blocks=1 00:36:24.338 --rc geninfo_unexecuted_blocks=1 00:36:24.338 00:36:24.338 ' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.338 --rc genhtml_branch_coverage=1 00:36:24.338 --rc genhtml_function_coverage=1 00:36:24.338 --rc genhtml_legend=1 00:36:24.338 --rc geninfo_all_blocks=1 00:36:24.338 --rc geninfo_unexecuted_blocks=1 00:36:24.338 00:36:24.338 ' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.338 --rc genhtml_branch_coverage=1 00:36:24.338 --rc genhtml_function_coverage=1 00:36:24.338 --rc genhtml_legend=1 00:36:24.338 --rc geninfo_all_blocks=1 00:36:24.338 --rc geninfo_unexecuted_blocks=1 00:36:24.338 00:36:24.338 ' 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.338 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.339 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.339 11:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.339 11:30:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:26.242 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:26.242 
11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:26.242 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:26.242 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:26.242 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.242 11:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:26.242 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:26.242 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:26.242 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:26.243 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:26.243 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:26.243 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:26.243 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:26.243 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:26.500 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:26.500 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:26.501 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:26.501 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:26.501 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:26.501 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:26.501 11:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:26.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:26.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:36:26.501 00:36:26.501 --- 10.0.0.2 ping statistics --- 00:36:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.501 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:26.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:26.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:36:26.501 00:36:26.501 --- 10.0.0.1 ping statistics --- 00:36:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:26.501 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:26.501 11:30:51 
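The trace above shows `nvmf_tcp_init` wiring a point-to-point test topology: the target-side interface (`cvl_0_0`) is moved into a private network namespace, the initiator keeps `cvl_0_1`, both ends get `10.0.0.x/24` addresses, an iptables rule opens the NVMe/TCP port 4420, and connectivity is verified with `ping` in both directions. A minimal sketch of that sequence follows; interface names, addresses, and the port are taken from the trace, while the `run`/`DRY_RUN` wrapper is an illustrative addition so the commands can be previewed without root.

```shell
# Dry-run sketch of the netns wiring performed by nvmf_tcp_init in the trace.
# DRY_RUN=1 (the default here) prints each command; a real run needs root.
DRY_RUN=${DRY_RUN:-1}
run() { if [[ ${DRY_RUN} -eq 1 ]]; then echo "+ $*"; else "$@"; fi; }

setup_test_net() {
  local target_if=$1 initiator_if=$2 ns=$3
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"                      # target NIC lives in the namespace
  run ip addr add 10.0.0.1/24 dev "$initiator_if"               # initiator IP (host side)
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP (inside ns)
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                                        # host -> target
  run ip netns exec "$ns" ping -c 1 10.0.0.1                    # target -> host
}

setup_test_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Running the target inside the namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` line later in the trace) is what lets the initiator-side tools talk to it over a real TCP path on physical NICs.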
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=408852 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 408852 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 408852 ']' 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:26.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:26.501 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:26.501 [2024-11-17 11:30:51.085425] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:26.501 [2024-11-17 11:30:51.086611] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:36:26.501 [2024-11-17 11:30:51.086668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:26.760 [2024-11-17 11:30:51.157952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:26.760 [2024-11-17 11:30:51.202693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:26.760 [2024-11-17 11:30:51.202746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:26.760 [2024-11-17 11:30:51.202769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:26.760 [2024-11-17 11:30:51.202781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:26.760 [2024-11-17 11:30:51.202790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:26.760 [2024-11-17 11:30:51.204339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:26.760 [2024-11-17 11:30:51.204407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:26.760 [2024-11-17 11:30:51.204404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:26.760 [2024-11-17 11:30:51.286048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:26.760 [2024-11-17 11:30:51.286240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:26.760 [2024-11-17 11:30:51.286244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:26.760 [2024-11-17 11:30:51.286549] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:26.760 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:27.019 [2024-11-17 11:30:51.641103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.019 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:27.586 11:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:27.586 [2024-11-17 11:30:52.241417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.845 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:28.104 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:28.363 Malloc0 00:36:28.363 11:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:28.621 Delay0 00:36:28.621 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.880 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:29.139 NULL1 00:36:29.139 11:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:29.707 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=409148 00:36:29.707 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:29.707 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:29.707 11:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.641 Read completed with error (sct=0, sc=11) 00:36:30.641 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
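The `rpc.py` calls in the trace above build the target configuration that the stress test will exercise: a TCP transport, one subsystem with a delay-wrapped malloc bdev (`Delay0`) and a null bdev (`NULL1`) as namespaces, and listeners on `10.0.0.2:4420`. A condensed sketch of that sequence, with arguments copied from the trace; the `rpc()` echo stand-in is illustrative, where a real run invokes `scripts/rpc.py` against the running target:

```shell
# Sketch of the target-side setup driven by ns_hotplug_stress.sh in the trace.
rpc() { echo "rpc.py $*"; }   # stand-in for scripts/rpc.py in a real run

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                 # backing RAM bdev
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000          # 1s artificial latency
rpc nvmf_subsystem_add_ns "$NQN" Delay0                  # ns 1: slow bdev
rpc bdev_null_create NULL1 1000 512                      # 1000 MiB null bdev
rpc nvmf_subsystem_add_ns "$NQN" NULL1                   # ns 2: resized during the test
```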
00:36:30.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:30.900 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:30.900 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:31.163 true 00:36:31.163 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:31.163 11:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:32.109 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.367 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:32.367 11:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:32.625 true 00:36:32.625 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:32.625 11:30:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.883 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.141 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:33.141 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:33.400 true 00:36:33.400 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:33.400 11:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.658 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.916 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:33.916 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:34.174 true 00:36:34.174 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 
00:36:34.174 11:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.105 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.363 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:35.363 11:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:35.623 true 00:36:35.623 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:35.623 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.881 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.139 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:36.139 11:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:36.397 true 00:36:36.397 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 409148 00:36:36.397 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.655 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.913 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:36.913 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:37.171 true 00:36:37.171 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:37.171 11:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.103 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.361 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:38.361 11:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:38.619 true 00:36:38.619 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:38.619 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.878 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.136 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:39.136 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:39.394 true 00:36:39.394 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:39.394 11:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.652 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.911 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:39.911 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:40.168 true 00:36:40.168 11:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:40.168 11:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.102 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.360 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:41.360 11:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:41.618 true 00:36:41.618 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:41.618 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.876 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.134 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:42.134 11:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:42.392 true 
00:36:42.392 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:42.392 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.650 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.908 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:42.908 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:43.166 true 00:36:43.424 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:43.424 11:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:44.359 11:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.617 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:44.617 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:44.876 true 00:36:44.876 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:44.876 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.134 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.393 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:45.393 11:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:45.662 true 00:36:45.662 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:45.662 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.926 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.184 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:46.184 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:46.443 true 00:36:46.443 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:46.443 11:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.379 11:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.637 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:47.637 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:47.896 true 00:36:47.896 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:47.896 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.154 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.412 11:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:48.412 11:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:48.671 true 00:36:48.671 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:48.671 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.929 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.187 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:49.187 11:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:49.445 true 00:36:49.445 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:49.445 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:50.379 11:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.379 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:36:50.638 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:50.638 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:50.896 true 00:36:50.896 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:50.896 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.154 11:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.412 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:51.412 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:51.670 true 00:36:51.670 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:51.670 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.927 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.186 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:52.186 11:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:52.444 true 00:36:52.701 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:52.702 11:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.636 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.893 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:53.893 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:54.150 true 00:36:54.150 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:54.150 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.407 11:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.664 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:54.664 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:54.921 true 00:36:54.921 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:54.921 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.179 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.438 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:55.438 11:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:55.695 true 00:36:55.695 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:55.695 11:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.628 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:36:56.628 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.628 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.887 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:56.887 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:57.144 true 00:36:57.144 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:57.144 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.402 11:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.661 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:57.661 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:57.918 true 00:36:57.918 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:57.918 11:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.852 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:58.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:58.852 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:58.852 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:59.110 true 00:36:59.110 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:59.110 11:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.368 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.932 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:59.933 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:59.933 true 00:36:59.933 11:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:36:59.933 11:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:00.865 Initializing NVMe Controllers
00:37:00.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:00.865 Controller IO queue size 128, less than required.
00:37:00.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:00.865 Controller IO queue size 128, less than required.
00:37:00.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:00.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:00.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:00.865 Initialization complete. Launching workers.
00:37:00.865 ========================================================
00:37:00.865                                                                                                          Latency(us)
00:37:00.865 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:00.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     556.84       0.27  101433.68    2832.52 1117169.86
00:37:00.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8566.45       4.18   14942.82    1688.55  446634.14
00:37:00.865 ========================================================
00:37:00.865 Total                                                                  :    9123.29       4.45   20221.83    1688.55 1117169.86
00:37:00.865
00:37:00.865 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:01.123 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:37:01.123 11:31:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:37:01.381 true 00:37:01.381 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409148 00:37:01.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (409148) - No such process 00:37:01.381 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 409148 00:37:01.381 11:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.639 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:01.905 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:01.905 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:01.905 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:01.905 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:01.905 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:02.162 null0 00:37:02.162 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:02.162 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:02.162 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:02.419 null1 00:37:02.419 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:02.419 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:02.419 11:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:02.677 null2 00:37:02.677 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:02.677 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:02.677 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:02.936 null3 00:37:02.936 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:02.936 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:02.936 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:03.193 null4 
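The `bdev_null_create null0 … null7 100 4096` calls traced above create one 100 MiB, 4096-byte-block null backing device per worker thread. A minimal sketch of that step, with the `scripts/rpc.py` call stubbed out so the loop logic runs standalone (the real script invokes the RPC client against a running SPDK target):

```shell
#!/usr/bin/env bash
# Stub standing in for scripts/rpc.py (assumption: the real test talks
# to a live SPDK target; here we only echo the call).
rpc() { echo "rpc $*"; }

nthreads=8
# One null bdev per worker, mirroring the bdev_null_create calls in the
# trace: name, size in MiB, block size in bytes.
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
```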
00:37:03.193 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:03.193 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:03.193 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:03.451 null5 00:37:03.451 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:03.451 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:03.451 11:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:03.709 null6 00:37:03.709 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:03.709 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:03.709 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:03.968 null7 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:03.968 11:31:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
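The interleaved `ns_hotplug_stress.sh@14`–`@17` lines above are the body of the `add_remove` worker: it binds a namespace ID to a bdev, then hot-adds and hot-removes that namespace ten times. A sketch of the worker under the same stubbed-`rpc.py` assumption (the real script calls `scripts/rpc.py` and, per `@18`, removes the namespace after each add):

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py so the worker runs standalone.
rpc() { echo "rpc $*"; }

# Worker traced at ns_hotplug_stress.sh@14-@18: ten add/remove cycles
# against the same subsystem and namespace ID.
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

add_remove 1 null0
```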
00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:03.968 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
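The `ns_hotplug_stress.sh@62`–`@64` lines show the fan-out: each `add_remove` call is backgrounded, its PID appended via `pids+=($!)`, and the `@66` `wait` later joins all eight workers so the add/remove RPCs race concurrently. A sketch of that launch pattern with the worker body stubbed (the real worker is the RPC loop above):

```shell
#!/usr/bin/env bash
# Stubbed worker: stands in for the real add/remove RPC loop.
add_remove() { :; }

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # one concurrent worker per namespace
    pids+=($!)                         # record PID, as at @64 in the trace
done
wait "${pids[@]}"                      # join all workers, as at @66
```

Backgrounding before collecting `$!` is what makes the stress concurrent: all eight namespaces are being added and removed against `cnode1` at once, which is the hotplug race this test exercises.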
00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 413277 413278 413280 413282 413284 413286 413288 413290 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:03.969 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.228 11:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:04.487 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:04.755 11:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:04.755 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.018 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.018 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.018 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.018 11:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.018 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.018 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.276 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:05.534 11:31:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:05.534 11:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.792 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.792 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:05.793 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:06.051 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:06.051 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.310 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.310 11:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.310 11:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:06.569 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.569 11:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:06.828 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:07.087 11:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.087 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.087 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:07.345 11:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:07.345 11:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:07.604 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:07.863 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.122 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.123 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:08.381 11:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.640 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:08.921 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.203 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:09.493 11:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:09.761 11:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:09.761 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:10.019 11:31:34 
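The interleaved `@16`-`@18` trace above comes from the namespace hotplug stress loop. A minimal sketch of what it appears to be doing follows; the parallel per-namespace worker layout is inferred from the interleaving, and the `rpc` stub stands in for `scripts/rpc.py`, so both are assumptions rather than the script's actual body.

```shell
#!/usr/bin/env bash
# Hedged sketch of the ns_hotplug_stress.sh loop traced above (@16-@18).
# `rpc` is a stand-in echo for scripts/rpc.py so the sketch runs anywhere;
# the one-worker-per-namespace shape is inferred from the shuffled trace.
subsys=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }            # stub for the real JSON-RPC client

add_remove() {                         # one worker per namespace ID
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do     # the @16 counter seen in the log
        rpc nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # @17
        rpc nvmf_subsystem_remove_ns "$subsys" "$nsid"           # @18
    done
}

out=$(
    for n in {1..8}; do                # namespaces 1..8 backed by null0..null7
        add_remove "$n" "null$((n - 1))" &
    done
    wait
)
adds=$(grep -c nvmf_subsystem_add_ns <<< "$out")
echo "issued $adds add_ns calls"       # 8 workers x 10 iterations = 80
```

Repeatedly attaching and detaching namespaces while initiator I/O is in flight is what exercises the hotplug paths this test targets.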
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:10.019 rmmod nvme_tcp 00:37:10.019 rmmod nvme_fabrics 00:37:10.019 rmmod nvme_keyring 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 408852 ']' 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 408852 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 408852 ']' 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 408852 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408852 00:37:10.019 11:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408852' 00:37:10.019 killing process with pid 408852 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 408852 00:37:10.019 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 408852 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:10.280 11:31:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.280 11:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.821 00:37:12.821 real 0m48.346s 00:37:12.821 user 3m22.468s 00:37:12.821 sys 0m21.728s 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:12.821 ************************************ 00:37:12.821 END TEST nvmf_ns_hotplug_stress 00:37:12.821 ************************************ 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.821 ************************************ 00:37:12.821 START TEST nvmf_delete_subsystem 00:37:12.821 ************************************ 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:12.821 * Looking for test storage... 00:37:12.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:37:12.821 11:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.821 
11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:12.821 11:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.821 --rc genhtml_branch_coverage=1 00:37:12.821 --rc genhtml_function_coverage=1 00:37:12.821 --rc genhtml_legend=1 00:37:12.821 --rc geninfo_all_blocks=1 00:37:12.821 --rc geninfo_unexecuted_blocks=1 00:37:12.821 00:37:12.821 ' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.821 --rc genhtml_branch_coverage=1 00:37:12.821 --rc genhtml_function_coverage=1 00:37:12.821 --rc genhtml_legend=1 00:37:12.821 --rc geninfo_all_blocks=1 00:37:12.821 --rc geninfo_unexecuted_blocks=1 00:37:12.821 00:37:12.821 ' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.821 --rc genhtml_branch_coverage=1 00:37:12.821 --rc genhtml_function_coverage=1 00:37:12.821 --rc genhtml_legend=1 00:37:12.821 --rc geninfo_all_blocks=1 00:37:12.821 --rc 
geninfo_unexecuted_blocks=1 00:37:12.821 00:37:12.821 ' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:12.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.821 --rc genhtml_branch_coverage=1 00:37:12.821 --rc genhtml_function_coverage=1 00:37:12.821 --rc genhtml_legend=1 00:37:12.821 --rc geninfo_all_blocks=1 00:37:12.821 --rc geninfo_unexecuted_blocks=1 00:37:12.821 00:37:12.821 ' 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.821 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.822 
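The `lt 1.15 2` / `cmp_versions` trace above (used to decide which lcov options apply) splits both version strings on `.` and `-` and compares them component-wise. A condensed, runnable sketch, assuming the real helper in `scripts/common.sh` does nothing beyond what the trace shows:

```shell
#!/usr/bin/env bash
# Sketch of the lt/cmp_versions logic traced above: IFS=.- splits each
# version into numeric components, which are compared left to right.
# Missing components default to 0; non-numeric components are not handled
# here (the real helper validates them via its `decimal` check).
lt() {                                  # "is $1 older than $2?"
    local -a ver1 ver2
    local v ver1_l ver2_l c1 c2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}
        ((c1 > c2)) && return 1         # first differing component decides
        ((c1 < c2)) && return 0
    done
    return 1                            # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"            # matches the 'return 0' in the trace
```

In the run above this is why `lcov --version` 1.15 selects the `--rc lcov_branch_coverage=1` option spelling rather than the newer one.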
11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.822 11:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.822 11:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:14.731 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:37:14.732 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:37:14.732 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:37:14.732 Found net devices under 0000:0a:00.0: cvl_0_0
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:37:14.732 Found net devices under 0000:0a:00.1: cvl_0_1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:14.732 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
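The `ipts` call at nvmf/common.sh@287 expands (nvmf/common.sh@790 above) into a plain `iptables` invocation with a `-m comment` tag appended, so a later cleanup step can find exactly the rules this harness added. A minimal sketch of that pattern, with `iptables` stubbed out as a printer so it runs without root; the stub and its output format are illustrative, not SPDK's code:

```shell
#!/usr/bin/env bash
# Sketch: tag every inserted rule with a searchable comment, mirroring the
# expansion seen above: ipts ARGS -> iptables ARGS -m comment --comment "SPDK_NVMF:ARGS"
ipts() {
    # "$*" joins the original arguments into one string for the comment tag.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Stub iptables as a printer so the sketch is runnable without privileges.
iptables() { printf 'iptables %s\n' "$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Deleting by comment later is then a matter of listing rules and matching the `SPDK_NVMF:` prefix, which is why the comment embeds the full original argument string.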
00:37:14.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:14.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms
00:37:14.993 
00:37:14.993 --- 10.0.0.2 ping statistics ---
00:37:14.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:14.993 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:14.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:14.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:37:14.993 
00:37:14.993 --- 10.0.0.1 ping statistics ---
00:37:14.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:14.993 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:37:14.993 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=416157
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 416157
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 416157 ']'
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:14.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
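The target binary is launched inside the namespace by prepending a wrapper array: nvmf/common.sh@266 builds `NVMF_TARGET_NS_CMD`, @293 splices it onto `NVMF_APP`, and @508 runs the result under `ip netns exec cvl_0_0_ns_spdk`. A minimal sketch of that bash array-prefix idiom, printing the final argv instead of executing it (launching the real `nvmf_tgt` needs root and the SPDK build tree):

```shell
#!/usr/bin/env bash
# Compose a command line by prepending an "ip netns exec <ns>" wrapper array,
# as nvmf/common.sh does for NVMF_APP. Arrays (not strings) keep each argv
# element intact even if it contains spaces.
NVMF_TARGET_NAMESPACE="cvl_0_0_ns_spdk"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
# Print the final argv one element per line rather than executing it.
printf '%s\n' "${NVMF_APP[@]}"
```

The same `"${NVMF_TARGET_NS_CMD[@]}"` prefix is what lets the ping check at @291 and the address assignment at @278 run inside the namespace while the rest of the script stays outside it.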
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:14.994 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:14.994 [2024-11-17 11:31:39.481720] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:14.994 [2024-11-17 11:31:39.482808] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:37:14.994 [2024-11-17 11:31:39.482880] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:14.994 [2024-11-17 11:31:39.552375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:14.994 [2024-11-17 11:31:39.593621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:14.994 [2024-11-17 11:31:39.593684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:14.994 [2024-11-17 11:31:39.593712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:14.994 [2024-11-17 11:31:39.593722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:14.994 [2024-11-17 11:31:39.593732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:14.994 [2024-11-17 11:31:39.594966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:14.994 [2024-11-17 11:31:39.594972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:15.253 [2024-11-17 11:31:39.673860] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
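The `-m 0x3` passed to `nvmf_tgt` (and echoed as `-c 0x3` in the EAL parameters) is a hexadecimal core mask: each set bit selects one core, which is why exactly two reactors come up above, on cores 0 and 1. A small sketch decoding such a mask into a core list:

```shell
#!/usr/bin/env bash
# Decode an SPDK/DPDK-style core mask into the list of selected cores.
# Bit i set in the mask means core i hosts a reactor; -m 0x3 -> cores 0 and 1.
mask=0x3
cores=()
for ((bit = 0; bit < 64; bit++)); do
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "cores: ${cores[*]}"
```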
00:37:15.253 [2024-11-17 11:31:39.673912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:37:15.253 [2024-11-17 11:31:39.674131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 [2024-11-17 11:31:39.731672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 [2024-11-17 11:31:39.751933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 NULL1
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 Delay0
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=416181
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:37:15.253 11:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
[2024-11-17 11:31:39.830382] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
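Stripped of the xtrace noise, the test body above is a short RPC sequence against the target. A sketch of it, with `rpc_cmd` stubbed to print the call (in the harness it forwards to SPDK's `scripts/rpc.py` over `/var/tmp/spdk.sock`); the parameter glosses in the comments are my reading of the commands, not output from the log:

```shell
#!/usr/bin/env bash
# The delete_subsystem test drives the target through these RPCs.
# rpc_cmd is stubbed here so the sketch runs without a live target.
rpc_cmd() { printf 'rpc: %s\n' "$*"; }

# delete_subsystem.sh@15: create the TCP transport
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
# @16: subsystem allowing any host (-a), with a serial and a namespace cap
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# @17: listen on the namespaced interface's address
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# @18: 1000 MiB null backing bdev with 512-byte blocks
rpc_cmd bdev_null_create NULL1 1000 512
# @23: wrap it in a delay bdev (latencies in microseconds, i.e. ~1 s each),
# so plenty of I/O is still in flight when the subsystem goes away
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# @24: expose the slow bdev as a namespace
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# @32: delete the subsystem while spdk_nvme_perf still has I/O queued on it
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
```

The delay bdev is the point of the test: it guarantees the `nvmf_delete_subsystem` at delete_subsystem.sh@32 races with outstanding I/O, which is what produces the aborted completions that follow.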
00:37:17.153 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:17.153 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:17.153 11:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Write completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 starting I/O failed: -6
00:37:17.411 Read completed with error (sct=0, sc=8)
00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 [2024-11-17 11:31:41.910057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687810 is same with the state(6) to be set 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 
Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, 
sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 starting I/O failed: -6 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Read completed with error (sct=0, sc=8) 00:37:17.411 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 starting I/O failed: -6 00:37:17.412 Write completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error (sct=0, sc=8) 00:37:17.412 Read completed with error 
(sct=0, sc=8) 00:37:17.412 starting I/O failed: -6
00:37:17.412 Read completed with error (sct=0, sc=8)
00:37:17.412 Write completed with error (sct=0, sc=8)
00:37:17.412 [further identical Read/Write "completed with error (sct=0, sc=8)" completions elided]
00:37:17.412 [2024-11-17 11:31:41.911073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618000d4b0 is same with the state(6) to be set
00:37:17.412 [further identical Read/Write completions elided]
00:37:18.347 [2024-11-17 11:31:42.886419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6955b0 is same with the state(6) to be set
00:37:18.347 [further identical Read/Write completions elided]
00:37:18.347 [2024-11-17 11:31:42.913979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618000d7e0 is same with the state(6) to be set
00:37:18.347 [further identical Read/Write completions elided]
00:37:18.347 [2024-11-17 11:31:42.914316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618000d020 is same with the state(6) to be set
00:37:18.347 [further identical Read/Write completions elided]
00:37:18.347 [2024-11-17 11:31:42.914465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6873f0 is same with the state(6) to be set
00:37:18.347 [further identical Read/Write completions elided]
00:37:18.347 [2024-11-17 11:31:42.915184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687b40 is same with the
state(6) to be set
00:37:18.347 Initializing NVMe Controllers
00:37:18.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:18.347 Controller IO queue size 128, less than required.
00:37:18.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:18.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:18.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:18.347 Initialization complete. Launching workers.
00:37:18.347 ========================================================
00:37:18.347 Latency(us)
00:37:18.347 Device Information : IOPS MiB/s Average min max
00:37:18.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 154.93 0.08 933573.60 426.77 1046472.73
00:37:18.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.88 0.08 911362.63 411.35 1012233.86
00:37:18.347 ========================================================
00:37:18.347 Total : 317.80 0.16 922190.47 411.35 1046472.73
00:37:18.347
00:37:18.347 [2024-11-17 11:31:42.915638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6955b0 (9): Bad file descriptor
00:37:18.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:18.347 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:18.347 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:18.347 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416181
00:37:18.347 11:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:18.915 11:31:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416181 00:37:18.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (416181) - No such process 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 416181 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 416181 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 416181 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:18.915 11:31:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.915 [2024-11-17 11:31:43.435853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:18.915 11:31:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=416585 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:18.915 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:18.915 [2024-11-17 11:31:43.493379] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
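The delete_subsystem.sh xtrace around this point waits for the spdk_nvme_perf child by polling `kill -0 <pid>` with a bounded retry counter (`(( delay++ > 20 ))` ... `sleep 0.5`). A minimal sketch of that polling pattern; the function name `wait_for_pid_exit` and its defaults are illustrative, not taken from the SPDK scripts:

```shell
# Poll until a PID disappears or a retry budget is exhausted.
# Mirrors the `(( delay++ > N ))` / `kill -0` / `sleep 0.5` loop in the trace.
# wait_for_pid_exit and max_tries are hypothetical names, not SPDK helpers.
wait_for_pid_exit() {
    local pid=$1 max_tries=${2:-30} delay=0
    while kill -0 "$pid" 2>/dev/null; do       # kill -0 only tests existence
        (( delay++ > max_tries )) && return 1  # budget spent, process still alive
        sleep 0.5
    done
    return 0                                   # process is gone
}
```

`kill -0` sends no signal; it only checks whether the PID exists, which is why the trace later prints `kill: (416585) - No such process` once perf has exited.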
00:37:19.480 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:19.480 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:19.480 11:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:20.046 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.046 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:20.046 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:20.303 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.303 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:20.303 11:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:20.869 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:20.869 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:20.869 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:21.434 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:21.434 11:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585 00:37:21.434 11:31:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:37:21.999 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:21.999 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585
00:37:21.999 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:37:22.257 Initializing NVMe Controllers
00:37:22.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:22.257 Controller IO queue size 128, less than required.
00:37:22.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:22.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:22.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:22.257 Initialization complete. Launching workers.
00:37:22.257 ========================================================
00:37:22.257 Latency(us)
00:37:22.257 Device Information : IOPS MiB/s Average min max
00:37:22.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003512.62 1000199.29 1011323.88
00:37:22.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007000.29 1000815.71 1042010.35
00:37:22.257 ========================================================
00:37:22.257 Total : 256.00 0.12 1005256.45 1000199.29 1042010.35
00:37:22.257
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416585
00:37:22.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (416585) - No such process
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 416585
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:37:22.516 11:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:22.516 rmmod nvme_tcp 00:37:22.516 rmmod nvme_fabrics 00:37:22.516 rmmod nvme_keyring 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 416157 ']' 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 416157 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 416157 ']' 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 416157 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 416157 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 416157' 00:37:22.516 killing process with pid 416157 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 416157 00:37:22.516 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 416157 00:37:22.775 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:22.775 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:22.775 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:22.776 11:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:24.684 00:37:24.684 real 0m12.359s 00:37:24.684 user 0m24.390s 00:37:24.684 sys 0m3.804s 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:24.684 ************************************ 00:37:24.684 END TEST nvmf_delete_subsystem 00:37:24.684 ************************************ 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:24.684 ************************************ 00:37:24.684 START TEST nvmf_host_management 00:37:24.684 ************************************ 00:37:24.684 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:24.946 * Looking for test storage... 
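The killprocess sequence earlier in the trace (common/autotest_common.sh@954-978) resolves the PID's command name with `ps --no-headers -o comm=` before signalling it, so a recycled PID belonging to some other program is never killed by mistake. A rough sketch of that guard, assuming GNU ps on Linux; `safe_kill` is a hypothetical name, not the real SPDK helper:

```shell
# Kill a PID only when it still belongs to the process we expect,
# guarding against the PID having been recycled by an unrelated program.
# safe_kill and its arguments are illustrative, not SPDK's killprocess.
safe_kill() {
    local pid=$1 expected=$2 name
    kill -0 "$pid" 2>/dev/null || return 0             # already gone: nothing to do
    name=$(ps --no-headers -o comm= -p "$pid") || return 0
    [ "$name" = "$expected" ] || return 1              # PID reused by someone else
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                    # reap it if it is our child
    return 0
}
```

The `wait` after `kill` matters in the same way the trace's `killprocess`/`wait` pairing does: it reaps the child so later `kill -0` probes see the PID as gone rather than as a zombie.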
00:37:24.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:24.946 11:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:24.946 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:24.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.947 --rc genhtml_branch_coverage=1 00:37:24.947 --rc genhtml_function_coverage=1 00:37:24.947 --rc genhtml_legend=1 00:37:24.947 --rc geninfo_all_blocks=1 00:37:24.947 --rc geninfo_unexecuted_blocks=1 00:37:24.947 00:37:24.947 ' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:24.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.947 --rc genhtml_branch_coverage=1 00:37:24.947 --rc genhtml_function_coverage=1 00:37:24.947 --rc genhtml_legend=1 00:37:24.947 --rc geninfo_all_blocks=1 00:37:24.947 --rc geninfo_unexecuted_blocks=1 00:37:24.947 00:37:24.947 ' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:24.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.947 --rc genhtml_branch_coverage=1 00:37:24.947 --rc genhtml_function_coverage=1 00:37:24.947 --rc genhtml_legend=1 00:37:24.947 --rc geninfo_all_blocks=1 00:37:24.947 --rc geninfo_unexecuted_blocks=1 00:37:24.947 00:37:24.947 ' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:24.947 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:24.947 --rc genhtml_branch_coverage=1 00:37:24.947 --rc genhtml_function_coverage=1 00:37:24.947 --rc genhtml_legend=1 00:37:24.947 --rc geninfo_all_blocks=1 00:37:24.947 --rc geninfo_unexecuted_blocks=1 00:37:24.947 00:37:24.947 ' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:24.947 11:31:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.947 
11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:24.947 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:24.948 11:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:27.482 
11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:27.482 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:27.482 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:27.482 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:27.482 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.482 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:27.482 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:27.482 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:27.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:27.483 11:31:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:27.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:27.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:37:27.483 00:37:27.483 --- 10.0.0.2 ping statistics --- 00:37:27.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.483 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:27.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:27.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:37:27.483 00:37:27.483 --- 10.0.0.1 ping statistics --- 00:37:27.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:27.483 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=418929 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 418929 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 418929 ']' 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.483 [2024-11-17 11:31:51.736259] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:27.483 [2024-11-17 11:31:51.737341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:27.483 [2024-11-17 11:31:51.737391] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:27.483 [2024-11-17 11:31:51.811561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:27.483 [2024-11-17 11:31:51.859173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:27.483 [2024-11-17 11:31:51.859231] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:27.483 [2024-11-17 11:31:51.859260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:27.483 [2024-11-17 11:31:51.859271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:27.483 [2024-11-17 11:31:51.859281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:27.483 [2024-11-17 11:31:51.860857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:27.483 [2024-11-17 11:31:51.864544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:27.483 [2024-11-17 11:31:51.864679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:27.483 [2024-11-17 11:31:51.864683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.483 [2024-11-17 11:31:51.946339] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:27.483 [2024-11-17 11:31:51.946559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:27.483 [2024-11-17 11:31:51.946901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:27.483 [2024-11-17 11:31:51.947383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:27.483 [2024-11-17 11:31:51.947659] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:27.483 11:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.483 [2024-11-17 11:31:52.005361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:27.483 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.484 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.484 Malloc0 00:37:27.484 [2024-11-17 11:31:52.085764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=419079 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 419079 /var/tmp/bdevperf.sock 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 419079 ']' 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:27.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:27.484 { 00:37:27.484 "params": { 00:37:27.484 "name": "Nvme$subsystem", 00:37:27.484 "trtype": "$TEST_TRANSPORT", 00:37:27.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.484 "adrfam": "ipv4", 00:37:27.484 "trsvcid": "$NVMF_PORT", 00:37:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.484 "hdgst": ${hdgst:-false}, 00:37:27.484 "ddgst": ${ddgst:-false} 00:37:27.484 }, 00:37:27.484 "method": "bdev_nvme_attach_controller" 00:37:27.484 } 00:37:27.484 EOF 00:37:27.484 )") 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:27.484 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:27.484 "params": { 00:37:27.484 "name": "Nvme0", 00:37:27.484 "trtype": "tcp", 00:37:27.484 "traddr": "10.0.0.2", 00:37:27.484 "adrfam": "ipv4", 00:37:27.484 "trsvcid": "4420", 00:37:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.484 "hdgst": false, 00:37:27.484 "ddgst": false 00:37:27.484 }, 00:37:27.484 "method": "bdev_nvme_attach_controller" 00:37:27.484 }' 00:37:27.742 [2024-11-17 11:31:52.171212] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:27.742 [2024-11-17 11:31:52.171284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419079 ] 00:37:27.742 [2024-11-17 11:31:52.241128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.742 [2024-11-17 11:31:52.288587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.000 Running I/O for 10 seconds... 
00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:28.258 11:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:28.258 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.518 11:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.518 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.518 [2024-11-17 11:31:53.021480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the 
state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.021878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa381e0 is same with the state(6) to be set 00:37:28.518 [2024-11-17 11:31:53.022013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:28.518 [2024-11-17 11:31:53.022057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.518 [2024-11-17 11:31:53.022451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.518 [2024-11-17 11:31:53.022466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 
11:31:53.022735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.022984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.022998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 
11:31:53.023383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.519 [2024-11-17 11:31:53.023619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.519 [2024-11-17 11:31:53.023634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:28.520 [2024-11-17 11:31:53.023943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.023982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:37:28.520 [2024-11-17 11:31:53.024125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:28.520 [2024-11-17 11:31:53.024157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.024173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:28.520 [2024-11-17 11:31:53.024187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.024201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:28.520 [2024-11-17 11:31:53.024214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.024228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:28.520 
[2024-11-17 11:31:53.024241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.024253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b34d70 is same with the state(6) to be set 00:37:28.520 [2024-11-17 11:31:53.025367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.520 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:28.520 00:37:28.520 Latency(us) 00:37:28.520 [2024-11-17T10:31:53.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.520 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:28.520 Job: Nvme0n1 ended in about 0.40 seconds with error 00:37:28.520 Verification LBA range: start 0x0 length 0x400 00:37:28.520 Nvme0n1 : 0.40 1599.12 99.95 159.91 0.00 35330.86 3907.89 34758.35 00:37:28.520 [2024-11-17T10:31:53.178Z] =================================================================================================================== 00:37:28.520 [2024-11-17T10:31:53.178Z] Total : 1599.12 99.95 159.91 0.00 35330.86 3907.89 34758.35 00:37:28.520 [2024-11-17 11:31:53.027239] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:28.520 [2024-11-17 11:31:53.027267] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b34d70 (9): Bad file descriptor 00:37:28.520 [2024-11-17 11:31:53.028489] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:28.520 [2024-11-17 11:31:53.028612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:28.520 [2024-11-17 11:31:53.028641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:28.520 [2024-11-17 11:31:53.028668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:28.520 [2024-11-17 11:31:53.028684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:28.520 [2024-11-17 11:31:53.028698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:28.520 [2024-11-17 11:31:53.028710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b34d70 00:37:28.520 [2024-11-17 11:31:53.028744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b34d70 (9): Bad file descriptor 00:37:28.520 [2024-11-17 11:31:53.028770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:28.520 [2024-11-17 11:31:53.028785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:28.520 [2024-11-17 11:31:53.028801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:37:28.520 [2024-11-17 11:31:53.028816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.520 11:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 419079 00:37:29.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (419079) - No such process 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.452 { 
00:37:29.452 "params": { 00:37:29.452 "name": "Nvme$subsystem", 00:37:29.452 "trtype": "$TEST_TRANSPORT", 00:37:29.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.452 "adrfam": "ipv4", 00:37:29.452 "trsvcid": "$NVMF_PORT", 00:37:29.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.452 "hdgst": ${hdgst:-false}, 00:37:29.452 "ddgst": ${ddgst:-false} 00:37:29.452 }, 00:37:29.452 "method": "bdev_nvme_attach_controller" 00:37:29.452 } 00:37:29.452 EOF 00:37:29.452 )") 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:29.452 11:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:29.452 "params": { 00:37:29.452 "name": "Nvme0", 00:37:29.452 "trtype": "tcp", 00:37:29.452 "traddr": "10.0.0.2", 00:37:29.452 "adrfam": "ipv4", 00:37:29.452 "trsvcid": "4420", 00:37:29.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.452 "hdgst": false, 00:37:29.452 "ddgst": false 00:37:29.452 }, 00:37:29.452 "method": "bdev_nvme_attach_controller" 00:37:29.452 }' 00:37:29.452 [2024-11-17 11:31:54.083048] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:29.453 [2024-11-17 11:31:54.083122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419334 ] 00:37:29.711 [2024-11-17 11:31:54.152048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.711 [2024-11-17 11:31:54.198079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.969 Running I/O for 1 seconds... 00:37:30.903 1536.00 IOPS, 96.00 MiB/s 00:37:30.903 Latency(us) 00:37:30.903 [2024-11-17T10:31:55.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.903 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:30.903 Verification LBA range: start 0x0 length 0x400 00:37:30.903 Nvme0n1 : 1.00 1593.53 99.60 0.00 0.00 39522.14 7621.59 36117.62 00:37:30.903 [2024-11-17T10:31:55.561Z] =================================================================================================================== 00:37:30.903 [2024-11-17T10:31:55.561Z] Total : 1593.53 99.60 0.00 0.00 39522.14 7621.59 36117.62 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:31.162 rmmod nvme_tcp 00:37:31.162 rmmod nvme_fabrics 00:37:31.162 rmmod nvme_keyring 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 418929 ']' 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 418929 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 418929 ']' 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 418929 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:31.162 11:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 418929 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 418929' 00:37:31.162 killing process with pid 418929 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 418929 00:37:31.162 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 418929 00:37:31.421 [2024-11-17 11:31:55.887029] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:31.421 11:31:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:31.421 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:31.422 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:31.422 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:31.422 11:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:33.328 00:37:33.328 real 0m8.646s 00:37:33.328 user 0m17.149s 00:37:33.328 sys 0m3.661s 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:33.328 ************************************ 00:37:33.328 END TEST nvmf_host_management 00:37:33.328 ************************************ 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:33.328 
11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:33.328 11:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:33.587 ************************************ 00:37:33.587 START TEST nvmf_lvol 00:37:33.587 ************************************ 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:33.587 * Looking for test storage... 00:37:33.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:33.587 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.587 --rc genhtml_branch_coverage=1 00:37:33.587 --rc 
genhtml_function_coverage=1 00:37:33.587 --rc genhtml_legend=1 00:37:33.587 --rc geninfo_all_blocks=1 00:37:33.587 --rc geninfo_unexecuted_blocks=1 00:37:33.587 00:37:33.587 ' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.587 --rc genhtml_branch_coverage=1 00:37:33.587 --rc genhtml_function_coverage=1 00:37:33.587 --rc genhtml_legend=1 00:37:33.587 --rc geninfo_all_blocks=1 00:37:33.587 --rc geninfo_unexecuted_blocks=1 00:37:33.587 00:37:33.587 ' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.587 --rc genhtml_branch_coverage=1 00:37:33.587 --rc genhtml_function_coverage=1 00:37:33.587 --rc genhtml_legend=1 00:37:33.587 --rc geninfo_all_blocks=1 00:37:33.587 --rc geninfo_unexecuted_blocks=1 00:37:33.587 00:37:33.587 ' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:33.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:33.587 --rc genhtml_branch_coverage=1 00:37:33.587 --rc genhtml_function_coverage=1 00:37:33.587 --rc genhtml_legend=1 00:37:33.587 --rc geninfo_all_blocks=1 00:37:33.587 --rc geninfo_unexecuted_blocks=1 00:37:33.587 00:37:33.587 ' 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:33.587 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.588 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:33.588 11:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:33.588 11:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:35.493 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.493 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:35.752 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:35.752 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:35.752 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:35.752 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:35.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:35.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:35.753 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:35.753 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:35.753 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:35.753 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:35.753 11:32:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:35.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:35.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:37:35.753 00:37:35.753 --- 10.0.0.2 ping statistics --- 00:37:35.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.753 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:37:35.753 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:35.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:35.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:37:35.753 00:37:35.753 --- 10.0.0.1 ping statistics --- 00:37:35.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.753 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:35.754 
11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=421435 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 421435 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 421435 ']' 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:35.754 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:35.754 [2024-11-17 11:32:00.405711] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:35.754 [2024-11-17 11:32:00.406787] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:35.754 [2024-11-17 11:32:00.406848] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:36.013 [2024-11-17 11:32:00.481293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:36.013 [2024-11-17 11:32:00.529799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:36.013 [2024-11-17 11:32:00.529866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:36.013 [2024-11-17 11:32:00.529880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:36.013 [2024-11-17 11:32:00.529891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:36.013 [2024-11-17 11:32:00.529901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:36.013 [2024-11-17 11:32:00.531295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.013 [2024-11-17 11:32:00.531362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:36.013 [2024-11-17 11:32:00.531365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.013 [2024-11-17 11:32:00.617366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:36.013 [2024-11-17 11:32:00.617557] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:36.013 [2024-11-17 11:32:00.617571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:37:36.013 [2024-11-17 11:32:00.617810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:36.013 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:36.013 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:36.013 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:36.013 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.013 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:36.272 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:36.272 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:36.530 [2024-11-17 11:32:00.936130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.530 11:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:36.789 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:36.789 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:37.047 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:37.047 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:37.306 11:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:37.565 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=167ccdba-eadb-4e5b-bee8-5bad3613caea 00:37:37.565 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 167ccdba-eadb-4e5b-bee8-5bad3613caea lvol 20 00:37:37.824 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9884d94d-3999-4bdb-b68f-34d51ccfd463 00:37:37.824 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:38.082 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9884d94d-3999-4bdb-b68f-34d51ccfd463 00:37:38.341 11:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:38.600 [2024-11-17 11:32:03.180238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.600 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:38.858 
11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=421851 00:37:38.858 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:38.858 11:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:40.232 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9884d94d-3999-4bdb-b68f-34d51ccfd463 MY_SNAPSHOT 00:37:40.232 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1d18c276-7d20-45ec-b8b9-8e45a62d5a1e 00:37:40.232 11:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9884d94d-3999-4bdb-b68f-34d51ccfd463 30 00:37:40.490 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1d18c276-7d20-45ec-b8b9-8e45a62d5a1e MY_CLONE 00:37:40.748 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e9f15768-59c6-425a-8038-ab2e6e27bea0 00:37:40.748 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e9f15768-59c6-425a-8038-ab2e6e27bea0 00:37:41.313 11:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 421851 00:37:49.426 Initializing NVMe Controllers 00:37:49.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:49.426 
Controller IO queue size 128, less than required. 00:37:49.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:49.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:49.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:49.426 Initialization complete. Launching workers. 00:37:49.426 ======================================================== 00:37:49.426 Latency(us) 00:37:49.426 Device Information : IOPS MiB/s Average min max 00:37:49.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10316.60 40.30 12415.27 2189.27 69585.15 00:37:49.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10462.50 40.87 12234.86 5285.98 67325.16 00:37:49.426 ======================================================== 00:37:49.426 Total : 20779.10 81.17 12324.43 2189.27 69585.15 00:37:49.426 00:37:49.426 11:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:49.685 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9884d94d-3999-4bdb-b68f-34d51ccfd463 00:37:49.943 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 167ccdba-eadb-4e5b-bee8-5bad3613caea 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:50.202 rmmod nvme_tcp 00:37:50.202 rmmod nvme_fabrics 00:37:50.202 rmmod nvme_keyring 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 421435 ']' 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 421435 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 421435 ']' 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 421435 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 421435 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421435' 00:37:50.202 killing process with pid 421435 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 421435 00:37:50.202 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 421435 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.460 11:32:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.460 11:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.369 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:52.369 00:37:52.369 real 0m19.005s 00:37:52.369 user 0m55.535s 00:37:52.369 sys 0m8.091s 00:37:52.369 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.369 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:52.369 ************************************ 00:37:52.369 END TEST nvmf_lvol 00:37:52.369 ************************************ 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:52.630 ************************************ 00:37:52.630 START TEST nvmf_lvs_grow 00:37:52.630 ************************************ 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:52.630 * Looking for test storage... 
00:37:52.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.630 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.630 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.630 --rc genhtml_branch_coverage=1 00:37:52.630 --rc genhtml_function_coverage=1 00:37:52.630 --rc genhtml_legend=1 00:37:52.630 --rc geninfo_all_blocks=1 00:37:52.630 --rc geninfo_unexecuted_blocks=1 00:37:52.630 00:37:52.630 ' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.630 --rc genhtml_branch_coverage=1 00:37:52.630 --rc genhtml_function_coverage=1 00:37:52.630 --rc genhtml_legend=1 00:37:52.630 --rc geninfo_all_blocks=1 00:37:52.630 --rc geninfo_unexecuted_blocks=1 00:37:52.630 00:37:52.630 ' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.630 --rc genhtml_branch_coverage=1 00:37:52.630 --rc genhtml_function_coverage=1 00:37:52.630 --rc genhtml_legend=1 00:37:52.630 --rc geninfo_all_blocks=1 00:37:52.630 --rc geninfo_unexecuted_blocks=1 00:37:52.630 00:37:52.630 ' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:52.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.630 --rc genhtml_branch_coverage=1 00:37:52.630 --rc genhtml_function_coverage=1 00:37:52.630 --rc genhtml_legend=1 00:37:52.630 --rc geninfo_all_blocks=1 00:37:52.630 --rc 
geninfo_unexecuted_blocks=1 00:37:52.630 00:37:52.630 ' 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:52.630 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:52.631 11:32:17 
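The trace above includes the lcov version gate from scripts/common.sh (`lt 1.15 2` via `cmp_versions`): each version string is split on `.` and `-` into an array (`IFS=.-; read -ra ver1`) and the fields are compared numerically, with missing fields treated as 0. A standalone sketch of that comparison (the function name here is illustrative, not the exact SPDK helper):

```shell
#!/usr/bin/env bash
# version_lt A B: succeed (exit 0) when A sorts before B, comparing numeric
# fields split on "." and "-", treating missing fields as 0 -- the same idea
# as cmp_versions in scripts/common.sh.
version_lt() {
    local IFS=.- a b i n x y
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # the gate taken above
```

The per-field numeric walk is what makes `2.39.2 < 2.40` come out right, where a plain string comparison would not.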
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.631 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.631 11:32:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:52.631 11:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.163 
11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.163 11:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:55.163 11:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:55.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:55.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.163 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:55.164 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.164 11:32:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:55.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.164 
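The device discovery above (`gather_supported_nvmf_pci_devs`) buckets NICs by PCI vendor:device ID read from sysfs; this run matched the two Intel E810 functions, `0000:0a:00.0` and `0000:0a:00.1` (0x8086 - 0x159b), then resolved their net devices `cvl_0_0`/`cvl_0_1`. The same scan idea, run against a throwaway fake sysfs tree so it works without the hardware (a sketch, not the full common.sh logic):

```shell
#!/usr/bin/env bash
# Minimal E810 scan in the spirit of gather_supported_nvmf_pci_devs:
# walk a devices directory and report entries whose vendor/device files
# match the Intel E810 ID pair seen in the log above.
scan_e810() {
    local base=$1 dev v d
    for dev in "$base"/*; do
        [ -r "$dev/vendor" ] || continue
        v=$(cat "$dev/vendor") d=$(cat "$dev/device")
        if [ "$v" = 0x8086 ] && [ "$d" = 0x159b ]; then
            echo "Found ${dev##*/} ($v - $d)"
        fi
    done
}

root=$(mktemp -d)                 # stand-in for /sys/bus/pci/devices
mkdir -p "$root/0000:0a:00.0" "$root/0000:0a:00.1"
for fn in "$root"/*; do
    echo 0x8086 > "$fn/vendor"
    echo 0x159b > "$fn/device"
done
scan_e810 "$root"                 # prints both E810 functions, as in the log
rm -rf "$root"
```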
11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:55.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:37:55.164 00:37:55.164 --- 10.0.0.2 ping statistics --- 00:37:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.164 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:55.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:55.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:37:55.164 00:37:55.164 --- 10.0.0.1 ping statistics --- 00:37:55.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.164 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.164 11:32:19 
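The `nvmf_tcp_init` sequence just traced moves one NIC port (`cvl_0_0`) into a private namespace and addresses both ends of the link, so target and initiator traffic crosses the physical wire; it then opens the NVMe/TCP listener port in iptables and pings both directions before starting the target. A condensed sketch of that wiring; `DRY_RUN` (on by default here) just prints the commands, since the real thing needs root and this specific NIC:

```shell
#!/usr/bin/env bash
# Namespace wiring performed by nvmf_tcp_init above, condensed.
# DRY_RUN=1 (the default in this sketch) echoes commands instead of running them.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target side into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port, then verify reachability both ways
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping target and initiator in separate namespaces is what lets a single host exercise the real TCP path end to end, as the two successful pings above confirm.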
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=425105 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 425105 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 425105 ']' 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:55.164 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.164 [2024-11-17 11:32:19.550559] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:55.164 [2024-11-17 11:32:19.551622] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:37:55.164 [2024-11-17 11:32:19.551676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.164 [2024-11-17 11:32:19.623387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.164 [2024-11-17 11:32:19.668977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.164 [2024-11-17 11:32:19.669031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.164 [2024-11-17 11:32:19.669061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.164 [2024-11-17 11:32:19.669073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.165 [2024-11-17 11:32:19.669083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.165 [2024-11-17 11:32:19.669820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.165 [2024-11-17 11:32:19.751990] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:55.165 [2024-11-17 11:32:19.752304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.165 11:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:55.423 [2024-11-17 11:32:20.074626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.683 ************************************ 00:37:55.683 START TEST lvs_grow_clean 00:37:55.683 ************************************ 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:37:55.683 11:32:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.683 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.944 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:55.944 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:56.203 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:37:56.203 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:37:56.203 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:56.464 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:56.464 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:56.464 11:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 lvol 150 00:37:56.723 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=00720506-e7a3-4223-b194-f51a47be27e0 00:37:56.723 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:56.723 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:56.982 [2024-11-17 11:32:21.522458] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:56.982 [2024-11-17 11:32:21.522624] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:56.982 true 00:37:56.983 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:37:56.983 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:57.241 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:57.241 11:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:57.501 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00720506-e7a3-4223-b194-f51a47be27e0 00:37:57.760 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:58.019 [2024-11-17 11:32:22.607217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:58.019 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=425543 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 425543 /var/tmp/bdevperf.sock 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 425543 ']' 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:58.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.278 11:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:58.537 [2024-11-17 11:32:22.945615] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:37:58.537 [2024-11-17 11:32:22.945718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425543 ] 00:37:58.537 [2024-11-17 11:32:23.013732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:58.537 [2024-11-17 11:32:23.063863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.537 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:58.537 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:58.537 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:59.103 Nvme0n1 00:37:59.103 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:59.361 [ 00:37:59.361 { 00:37:59.361 "name": "Nvme0n1", 00:37:59.361 "aliases": [ 00:37:59.361 "00720506-e7a3-4223-b194-f51a47be27e0" 00:37:59.361 ], 00:37:59.361 "product_name": "NVMe disk", 00:37:59.361 
"block_size": 4096, 00:37:59.361 "num_blocks": 38912, 00:37:59.361 "uuid": "00720506-e7a3-4223-b194-f51a47be27e0", 00:37:59.361 "numa_id": 0, 00:37:59.361 "assigned_rate_limits": { 00:37:59.361 "rw_ios_per_sec": 0, 00:37:59.361 "rw_mbytes_per_sec": 0, 00:37:59.361 "r_mbytes_per_sec": 0, 00:37:59.361 "w_mbytes_per_sec": 0 00:37:59.361 }, 00:37:59.361 "claimed": false, 00:37:59.361 "zoned": false, 00:37:59.361 "supported_io_types": { 00:37:59.361 "read": true, 00:37:59.361 "write": true, 00:37:59.361 "unmap": true, 00:37:59.361 "flush": true, 00:37:59.361 "reset": true, 00:37:59.361 "nvme_admin": true, 00:37:59.361 "nvme_io": true, 00:37:59.361 "nvme_io_md": false, 00:37:59.361 "write_zeroes": true, 00:37:59.361 "zcopy": false, 00:37:59.361 "get_zone_info": false, 00:37:59.361 "zone_management": false, 00:37:59.361 "zone_append": false, 00:37:59.361 "compare": true, 00:37:59.361 "compare_and_write": true, 00:37:59.361 "abort": true, 00:37:59.361 "seek_hole": false, 00:37:59.361 "seek_data": false, 00:37:59.361 "copy": true, 00:37:59.362 "nvme_iov_md": false 00:37:59.362 }, 00:37:59.362 "memory_domains": [ 00:37:59.362 { 00:37:59.362 "dma_device_id": "system", 00:37:59.362 "dma_device_type": 1 00:37:59.362 } 00:37:59.362 ], 00:37:59.362 "driver_specific": { 00:37:59.362 "nvme": [ 00:37:59.362 { 00:37:59.362 "trid": { 00:37:59.362 "trtype": "TCP", 00:37:59.362 "adrfam": "IPv4", 00:37:59.362 "traddr": "10.0.0.2", 00:37:59.362 "trsvcid": "4420", 00:37:59.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:59.362 }, 00:37:59.362 "ctrlr_data": { 00:37:59.362 "cntlid": 1, 00:37:59.362 "vendor_id": "0x8086", 00:37:59.362 "model_number": "SPDK bdev Controller", 00:37:59.362 "serial_number": "SPDK0", 00:37:59.362 "firmware_revision": "25.01", 00:37:59.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.362 "oacs": { 00:37:59.362 "security": 0, 00:37:59.362 "format": 0, 00:37:59.362 "firmware": 0, 00:37:59.362 "ns_manage": 0 00:37:59.362 }, 00:37:59.362 "multi_ctrlr": true, 
00:37:59.362 "ana_reporting": false 00:37:59.362 }, 00:37:59.362 "vs": { 00:37:59.362 "nvme_version": "1.3" 00:37:59.362 }, 00:37:59.362 "ns_data": { 00:37:59.362 "id": 1, 00:37:59.362 "can_share": true 00:37:59.362 } 00:37:59.362 } 00:37:59.362 ], 00:37:59.362 "mp_policy": "active_passive" 00:37:59.362 } 00:37:59.362 } 00:37:59.362 ] 00:37:59.362 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=425678 00:37:59.362 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:59.362 11:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:59.621 Running I/O for 10 seconds... 00:38:00.561 Latency(us) 00:38:00.561 [2024-11-17T10:32:25.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:00.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.562 Nvme0n1 : 1.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:38:00.562 [2024-11-17T10:32:25.220Z] =================================================================================================================== 00:38:00.562 [2024-11-17T10:32:25.220Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:38:00.562 00:38:01.499 11:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:01.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:01.499 Nvme0n1 : 2.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:38:01.499 [2024-11-17T10:32:26.157Z] 
=================================================================================================================== 00:38:01.499 [2024-11-17T10:32:26.157Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:38:01.499 00:38:01.758 true 00:38:01.758 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:01.758 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:02.018 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:02.018 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:02.018 11:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 425678 00:38:02.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.586 Nvme0n1 : 3.00 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:38:02.586 [2024-11-17T10:32:27.244Z] =================================================================================================================== 00:38:02.586 [2024-11-17T10:32:27.244Z] Total : 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:38:02.586 00:38:03.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.520 Nvme0n1 : 4.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:38:03.520 [2024-11-17T10:32:28.178Z] =================================================================================================================== 00:38:03.520 [2024-11-17T10:32:28.178Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:38:03.520 00:38:04.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:04.498 Nvme0n1 : 5.00 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 00:38:04.498 [2024-11-17T10:32:29.156Z] =================================================================================================================== 00:38:04.498 [2024-11-17T10:32:29.156Z] Total : 15544.80 60.72 0.00 0.00 0.00 0.00 0.00 00:38:04.498 00:38:05.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.473 Nvme0n1 : 6.00 15578.67 60.85 0.00 0.00 0.00 0.00 0.00 00:38:05.473 [2024-11-17T10:32:30.131Z] =================================================================================================================== 00:38:05.473 [2024-11-17T10:32:30.131Z] Total : 15578.67 60.85 0.00 0.00 0.00 0.00 0.00 00:38:05.473 00:38:06.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.409 Nvme0n1 : 7.00 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:38:06.409 [2024-11-17T10:32:31.067Z] =================================================================================================================== 00:38:06.409 [2024-11-17T10:32:31.067Z] Total : 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:38:06.409 00:38:07.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.784 Nvme0n1 : 8.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:38:07.784 [2024-11-17T10:32:32.442Z] =================================================================================================================== 00:38:07.784 [2024-11-17T10:32:32.442Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:38:07.784 00:38:08.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:08.719 Nvme0n1 : 9.00 15653.00 61.14 0.00 0.00 0.00 0.00 0.00 00:38:08.719 [2024-11-17T10:32:33.377Z] =================================================================================================================== 00:38:08.719 [2024-11-17T10:32:33.377Z] Total : 15653.00 61.14 0.00 0.00 0.00 0.00 0.00 00:38:08.719 
00:38:09.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.653 Nvme0n1 : 10.00 15687.90 61.28 0.00 0.00 0.00 0.00 0.00 00:38:09.653 [2024-11-17T10:32:34.311Z] =================================================================================================================== 00:38:09.653 [2024-11-17T10:32:34.311Z] Total : 15687.90 61.28 0.00 0.00 0.00 0.00 0.00 00:38:09.653 00:38:09.653 00:38:09.653 Latency(us) 00:38:09.653 [2024-11-17T10:32:34.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.654 Nvme0n1 : 10.01 15688.53 61.28 0.00 0.00 8154.30 4247.70 20583.16 00:38:09.654 [2024-11-17T10:32:34.312Z] =================================================================================================================== 00:38:09.654 [2024-11-17T10:32:34.312Z] Total : 15688.53 61.28 0.00 0.00 8154.30 4247.70 20583.16 00:38:09.654 { 00:38:09.654 "results": [ 00:38:09.654 { 00:38:09.654 "job": "Nvme0n1", 00:38:09.654 "core_mask": "0x2", 00:38:09.654 "workload": "randwrite", 00:38:09.654 "status": "finished", 00:38:09.654 "queue_depth": 128, 00:38:09.654 "io_size": 4096, 00:38:09.654 "runtime": 10.007755, 00:38:09.654 "iops": 15688.533542237994, 00:38:09.654 "mibps": 61.283334149367164, 00:38:09.654 "io_failed": 0, 00:38:09.654 "io_timeout": 0, 00:38:09.654 "avg_latency_us": 8154.304643760869, 00:38:09.654 "min_latency_us": 4247.7037037037035, 00:38:09.654 "max_latency_us": 20583.158518518518 00:38:09.654 } 00:38:09.654 ], 00:38:09.654 "core_count": 1 00:38:09.654 } 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 425543 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 425543 ']' 00:38:09.654 11:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 425543 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425543 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425543' 00:38:09.654 killing process with pid 425543 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 425543 00:38:09.654 Received shutdown signal, test time was about 10.000000 seconds 00:38:09.654 00:38:09.654 Latency(us) 00:38:09.654 [2024-11-17T10:32:34.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.654 [2024-11-17T10:32:34.312Z] =================================================================================================================== 00:38:09.654 [2024-11-17T10:32:34.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:09.654 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 425543 00:38:09.912 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:10.171 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:10.430 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:10.430 11:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:10.689 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:10.689 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:10.689 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:10.948 [2024-11-17 11:32:35.394490] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:10.948 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:11.208 request: 00:38:11.208 { 00:38:11.208 "uuid": "d1f54a32-18b2-4f40-ab60-d12c77f5b834", 00:38:11.208 "method": 
"bdev_lvol_get_lvstores", 00:38:11.208 "req_id": 1 00:38:11.208 } 00:38:11.208 Got JSON-RPC error response 00:38:11.208 response: 00:38:11.208 { 00:38:11.208 "code": -19, 00:38:11.208 "message": "No such device" 00:38:11.208 } 00:38:11.208 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:11.208 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:11.208 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:11.208 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:11.208 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:11.468 aio_bdev 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00720506-e7a3-4223-b194-f51a47be27e0 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=00720506-e7a3-4223-b194-f51a47be27e0 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:11.468 11:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:11.727 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00720506-e7a3-4223-b194-f51a47be27e0 -t 2000 00:38:11.988 [ 00:38:11.988 { 00:38:11.988 "name": "00720506-e7a3-4223-b194-f51a47be27e0", 00:38:11.988 "aliases": [ 00:38:11.988 "lvs/lvol" 00:38:11.988 ], 00:38:11.988 "product_name": "Logical Volume", 00:38:11.988 "block_size": 4096, 00:38:11.988 "num_blocks": 38912, 00:38:11.988 "uuid": "00720506-e7a3-4223-b194-f51a47be27e0", 00:38:11.988 "assigned_rate_limits": { 00:38:11.988 "rw_ios_per_sec": 0, 00:38:11.988 "rw_mbytes_per_sec": 0, 00:38:11.988 "r_mbytes_per_sec": 0, 00:38:11.988 "w_mbytes_per_sec": 0 00:38:11.988 }, 00:38:11.988 "claimed": false, 00:38:11.988 "zoned": false, 00:38:11.988 "supported_io_types": { 00:38:11.988 "read": true, 00:38:11.988 "write": true, 00:38:11.988 "unmap": true, 00:38:11.988 "flush": false, 00:38:11.988 "reset": true, 00:38:11.988 "nvme_admin": false, 00:38:11.988 "nvme_io": false, 00:38:11.988 "nvme_io_md": false, 00:38:11.988 "write_zeroes": true, 00:38:11.988 "zcopy": false, 00:38:11.988 "get_zone_info": false, 00:38:11.988 "zone_management": false, 00:38:11.988 "zone_append": false, 00:38:11.988 "compare": false, 00:38:11.988 "compare_and_write": false, 00:38:11.988 "abort": false, 00:38:11.988 "seek_hole": true, 00:38:11.988 "seek_data": true, 00:38:11.988 "copy": false, 00:38:11.988 "nvme_iov_md": false 00:38:11.988 }, 00:38:11.988 "driver_specific": { 00:38:11.988 "lvol": { 00:38:11.988 "lvol_store_uuid": "d1f54a32-18b2-4f40-ab60-d12c77f5b834", 00:38:11.988 "base_bdev": "aio_bdev", 00:38:11.988 
"thin_provision": false, 00:38:11.988 "num_allocated_clusters": 38, 00:38:11.988 "snapshot": false, 00:38:11.988 "clone": false, 00:38:11.988 "esnap_clone": false 00:38:11.988 } 00:38:11.988 } 00:38:11.988 } 00:38:11.988 ] 00:38:11.988 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:11.988 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:11.988 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:12.248 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:12.248 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 00:38:12.248 11:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:12.508 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:12.508 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00720506-e7a3-4223-b194-f51a47be27e0 00:38:12.767 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1f54a32-18b2-4f40-ab60-d12c77f5b834 
00:38:13.026 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:13.284 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:13.285 00:38:13.285 real 0m17.790s 00:38:13.285 user 0m17.321s 00:38:13.285 sys 0m1.871s 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:13.285 ************************************ 00:38:13.285 END TEST lvs_grow_clean 00:38:13.285 ************************************ 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:13.285 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:13.544 ************************************ 00:38:13.544 START TEST lvs_grow_dirty 00:38:13.544 ************************************ 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:13.544 11:32:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:13.544 11:32:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:13.803 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:13.803 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:14.062 11:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=283d789d-18d4-4413-aaed-d370cbd9e579 00:38:14.062 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:14.062 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:14.320 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:14.320 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:14.320 11:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 283d789d-18d4-4413-aaed-d370cbd9e579 lvol 150 00:38:14.578 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:14.578 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:14.578 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:14.836 [2024-11-17 11:32:39.386442] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:14.836 [2024-11-17 
11:32:39.386581] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:14.836 true 00:38:14.836 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:14.836 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:15.094 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:15.094 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:15.352 11:32:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:15.610 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:15.868 [2024-11-17 11:32:40.470828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:15.868 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=427697 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 427697 /var/tmp/bdevperf.sock 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 427697 ']' 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:16.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:16.127 11:32:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:16.386 [2024-11-17 11:32:40.794620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:16.386 [2024-11-17 11:32:40.794728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427697 ] 00:38:16.386 [2024-11-17 11:32:40.865455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.386 [2024-11-17 11:32:40.917046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:16.386 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.386 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:16.386 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:16.951 Nvme0n1 00:38:16.951 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:17.210 [ 00:38:17.210 { 00:38:17.210 "name": "Nvme0n1", 00:38:17.210 "aliases": [ 00:38:17.210 "24488183-54c8-41e9-a6e7-9124e0fa80e8" 00:38:17.210 ], 00:38:17.210 "product_name": "NVMe disk", 00:38:17.210 "block_size": 4096, 00:38:17.210 "num_blocks": 38912, 00:38:17.210 "uuid": "24488183-54c8-41e9-a6e7-9124e0fa80e8", 00:38:17.210 "numa_id": 0, 00:38:17.210 "assigned_rate_limits": { 00:38:17.210 "rw_ios_per_sec": 0, 00:38:17.210 "rw_mbytes_per_sec": 0, 00:38:17.210 "r_mbytes_per_sec": 0, 00:38:17.210 "w_mbytes_per_sec": 0 00:38:17.210 }, 00:38:17.210 "claimed": false, 00:38:17.210 "zoned": false, 
00:38:17.210 "supported_io_types": { 00:38:17.210 "read": true, 00:38:17.210 "write": true, 00:38:17.210 "unmap": true, 00:38:17.210 "flush": true, 00:38:17.210 "reset": true, 00:38:17.210 "nvme_admin": true, 00:38:17.210 "nvme_io": true, 00:38:17.210 "nvme_io_md": false, 00:38:17.210 "write_zeroes": true, 00:38:17.210 "zcopy": false, 00:38:17.210 "get_zone_info": false, 00:38:17.210 "zone_management": false, 00:38:17.210 "zone_append": false, 00:38:17.210 "compare": true, 00:38:17.210 "compare_and_write": true, 00:38:17.210 "abort": true, 00:38:17.210 "seek_hole": false, 00:38:17.210 "seek_data": false, 00:38:17.210 "copy": true, 00:38:17.210 "nvme_iov_md": false 00:38:17.210 }, 00:38:17.210 "memory_domains": [ 00:38:17.210 { 00:38:17.210 "dma_device_id": "system", 00:38:17.210 "dma_device_type": 1 00:38:17.210 } 00:38:17.210 ], 00:38:17.210 "driver_specific": { 00:38:17.210 "nvme": [ 00:38:17.210 { 00:38:17.210 "trid": { 00:38:17.210 "trtype": "TCP", 00:38:17.210 "adrfam": "IPv4", 00:38:17.210 "traddr": "10.0.0.2", 00:38:17.210 "trsvcid": "4420", 00:38:17.210 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:17.210 }, 00:38:17.210 "ctrlr_data": { 00:38:17.210 "cntlid": 1, 00:38:17.210 "vendor_id": "0x8086", 00:38:17.210 "model_number": "SPDK bdev Controller", 00:38:17.210 "serial_number": "SPDK0", 00:38:17.210 "firmware_revision": "25.01", 00:38:17.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:17.210 "oacs": { 00:38:17.210 "security": 0, 00:38:17.210 "format": 0, 00:38:17.210 "firmware": 0, 00:38:17.210 "ns_manage": 0 00:38:17.210 }, 00:38:17.210 "multi_ctrlr": true, 00:38:17.210 "ana_reporting": false 00:38:17.210 }, 00:38:17.210 "vs": { 00:38:17.210 "nvme_version": "1.3" 00:38:17.210 }, 00:38:17.210 "ns_data": { 00:38:17.210 "id": 1, 00:38:17.210 "can_share": true 00:38:17.210 } 00:38:17.210 } 00:38:17.210 ], 00:38:17.210 "mp_policy": "active_passive" 00:38:17.210 } 00:38:17.210 } 00:38:17.210 ] 00:38:17.210 11:32:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=427800 00:38:17.210 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:17.210 11:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:17.468 Running I/O for 10 seconds... 00:38:18.404 Latency(us) 00:38:18.404 [2024-11-17T10:32:43.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:18.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.404 Nvme0n1 : 1.00 13517.00 52.80 0.00 0.00 0.00 0.00 0.00 00:38:18.404 [2024-11-17T10:32:43.062Z] =================================================================================================================== 00:38:18.404 [2024-11-17T10:32:43.062Z] Total : 13517.00 52.80 0.00 0.00 0.00 0.00 0.00 00:38:18.404 00:38:19.338 11:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:19.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.338 Nvme0n1 : 2.00 13630.50 53.24 0.00 0.00 0.00 0.00 0.00 00:38:19.338 [2024-11-17T10:32:43.996Z] =================================================================================================================== 00:38:19.338 [2024-11-17T10:32:43.996Z] Total : 13630.50 53.24 0.00 0.00 0.00 0.00 0.00 00:38:19.338 00:38:19.597 true 00:38:19.597 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:19.597 11:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:19.855 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:19.855 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:19.855 11:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 427800 00:38:20.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.421 Nvme0n1 : 3.00 13673.67 53.41 0.00 0.00 0.00 0.00 0.00 00:38:20.421 [2024-11-17T10:32:45.079Z] =================================================================================================================== 00:38:20.421 [2024-11-17T10:32:45.079Z] Total : 13673.67 53.41 0.00 0.00 0.00 0.00 0.00 00:38:20.421 00:38:21.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.355 Nvme0n1 : 4.00 13747.25 53.70 0.00 0.00 0.00 0.00 0.00 00:38:21.355 [2024-11-17T10:32:46.013Z] =================================================================================================================== 00:38:21.355 [2024-11-17T10:32:46.013Z] Total : 13747.25 53.70 0.00 0.00 0.00 0.00 0.00 00:38:21.355 00:38:22.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.728 Nvme0n1 : 5.00 13788.20 53.86 0.00 0.00 0.00 0.00 0.00 00:38:22.728 [2024-11-17T10:32:47.386Z] =================================================================================================================== 00:38:22.728 [2024-11-17T10:32:47.386Z] Total : 13788.20 53.86 0.00 0.00 0.00 0.00 0.00 00:38:22.728 00:38:23.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:23.664 Nvme0n1 : 6.00 13807.50 53.94 0.00 0.00 0.00 0.00 0.00 00:38:23.664 [2024-11-17T10:32:48.322Z] =================================================================================================================== 00:38:23.664 [2024-11-17T10:32:48.322Z] Total : 13807.50 53.94 0.00 0.00 0.00 0.00 0.00 00:38:23.664 00:38:24.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.598 Nvme0n1 : 7.00 13793.86 53.88 0.00 0.00 0.00 0.00 0.00 00:38:24.598 [2024-11-17T10:32:49.256Z] =================================================================================================================== 00:38:24.598 [2024-11-17T10:32:49.256Z] Total : 13793.86 53.88 0.00 0.00 0.00 0.00 0.00 00:38:24.598 00:38:25.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.531 Nvme0n1 : 8.00 13823.62 54.00 0.00 0.00 0.00 0.00 0.00 00:38:25.531 [2024-11-17T10:32:50.189Z] =================================================================================================================== 00:38:25.531 [2024-11-17T10:32:50.189Z] Total : 13823.62 54.00 0.00 0.00 0.00 0.00 0.00 00:38:25.531 00:38:26.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.465 Nvme0n1 : 9.00 13829.00 54.02 0.00 0.00 0.00 0.00 0.00 00:38:26.465 [2024-11-17T10:32:51.123Z] =================================================================================================================== 00:38:26.465 [2024-11-17T10:32:51.123Z] Total : 13829.00 54.02 0.00 0.00 0.00 0.00 0.00 00:38:26.465 00:38:27.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.400 Nvme0n1 : 10.00 13858.90 54.14 0.00 0.00 0.00 0.00 0.00 00:38:27.400 [2024-11-17T10:32:52.058Z] =================================================================================================================== 00:38:27.400 [2024-11-17T10:32:52.058Z] Total : 13858.90 54.14 0.00 0.00 0.00 0.00 0.00 00:38:27.400 
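Editorial note: in the bdevperf table above, the MiB/s column is just the IOPS column scaled by the IO size (`-o 4096` on the bdevperf command line). A sketch using the second-10 row:

```python
# bdevperf reports both IOPS and MiB/s for the 4 KiB random-write workload;
# the second figure is IOPS scaled by the IO size.
IO_SIZE = 4096           # bytes, from -o 4096 on the bdevperf command line
iops = 13858.90          # IOPS column, second 10 of the run above

mibps = iops * IO_SIZE / (1024 * 1024)
print(round(mibps, 2))   # 54.14, matching the MiB/s column
```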
00:38:27.400 00:38:27.400 Latency(us) 00:38:27.400 [2024-11-17T10:32:52.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:27.400 Nvme0n1 : 10.01 13859.58 54.14 0.00 0.00 9228.37 2742.80 12718.84 00:38:27.400 [2024-11-17T10:32:52.058Z] =================================================================================================================== 00:38:27.400 [2024-11-17T10:32:52.058Z] Total : 13859.58 54.14 0.00 0.00 9228.37 2742.80 12718.84 00:38:27.400 { 00:38:27.400 "results": [ 00:38:27.400 { 00:38:27.400 "job": "Nvme0n1", 00:38:27.400 "core_mask": "0x2", 00:38:27.400 "workload": "randwrite", 00:38:27.400 "status": "finished", 00:38:27.400 "queue_depth": 128, 00:38:27.400 "io_size": 4096, 00:38:27.400 "runtime": 10.007589, 00:38:27.400 "iops": 13859.58196324809, 00:38:27.400 "mibps": 54.138992043937854, 00:38:27.400 "io_failed": 0, 00:38:27.400 "io_timeout": 0, 00:38:27.401 "avg_latency_us": 9228.36560020529, 00:38:27.401 "min_latency_us": 2742.8029629629627, 00:38:27.401 "max_latency_us": 12718.838518518518 00:38:27.401 } 00:38:27.401 ], 00:38:27.401 "core_count": 1 00:38:27.401 } 00:38:27.401 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 427697 00:38:27.401 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 427697 ']' 00:38:27.401 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 427697 00:38:27.401 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:27.401 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.401 11:32:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427697 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427697' 00:38:27.660 killing process with pid 427697 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 427697 00:38:27.660 Received shutdown signal, test time was about 10.000000 seconds 00:38:27.660 00:38:27.660 Latency(us) 00:38:27.660 [2024-11-17T10:32:52.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.660 [2024-11-17T10:32:52.318Z] =================================================================================================================== 00:38:27.660 [2024-11-17T10:32:52.318Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 427697 00:38:27.660 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:27.918 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:28.484 11:32:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:28.484 11:32:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:28.484 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:28.484 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:28.484 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 425105 00:38:28.484 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 425105 00:38:28.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 425105 Killed "${NVMF_APP[@]}" "$@" 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=429050 00:38:28.744 11:32:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 429050 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 429050 ']' 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:28.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:28.744 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:28.744 [2024-11-17 11:32:53.205663] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:28.744 [2024-11-17 11:32:53.206796] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:28.744 [2024-11-17 11:32:53.206872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:28.744 [2024-11-17 11:32:53.279094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.744 [2024-11-17 11:32:53.324432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:28.744 [2024-11-17 11:32:53.324491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:28.744 [2024-11-17 11:32:53.324521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:28.744 [2024-11-17 11:32:53.324542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:28.744 [2024-11-17 11:32:53.324553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:28.744 [2024-11-17 11:32:53.325154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.002 [2024-11-17 11:32:53.409613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:29.002 [2024-11-17 11:32:53.409925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
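Editorial note: the cluster counts this dirty-grow path asserts (49 before growing, 99 after, 61 free) follow from the sizes visible in the log: the aio file is truncated from 200M to 400M and the lvstore uses 4 MiB clusters. A sketch, assuming one cluster of lvstore metadata overhead (the exact overhead depends on `--md-pages-per-cluster-ratio`, so this is an assumption consistent with the observed counts, not a derivation from the on-disk format):

```python
# Cluster accounting for the dirty-grow path, from values in this log:
# truncate -s 200M ... then truncate -s 400M, --cluster-sz 4194304 (4 MiB).
CLUSTER_MB = 4
md_clusters = 1  # assumed metadata overhead, consistent with the log

before = 200 // CLUSTER_MB - md_clusters  # matches data_clusters == 49
after = 400 // CLUSTER_MB - md_clusters   # matches data_clusters == 99
free_after = after - 38                   # lvol holds 38 allocated clusters

print(before, after, free_after)  # 49 99 61
```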
00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:29.003 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:29.261 [2024-11-17 11:32:53.819809] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:29.261 [2024-11-17 11:32:53.819966] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:29.261 [2024-11-17 11:32:53.820013] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:29.261 11:32:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:29.519 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 24488183-54c8-41e9-a6e7-9124e0fa80e8 -t 2000 00:38:29.778 [ 00:38:29.778 { 00:38:29.778 "name": "24488183-54c8-41e9-a6e7-9124e0fa80e8", 00:38:29.778 "aliases": [ 00:38:29.778 "lvs/lvol" 00:38:29.778 ], 00:38:29.778 "product_name": "Logical Volume", 00:38:29.778 "block_size": 4096, 00:38:29.778 "num_blocks": 38912, 00:38:29.778 "uuid": "24488183-54c8-41e9-a6e7-9124e0fa80e8", 00:38:29.778 "assigned_rate_limits": { 00:38:29.778 "rw_ios_per_sec": 0, 00:38:29.778 "rw_mbytes_per_sec": 0, 00:38:29.778 "r_mbytes_per_sec": 0, 00:38:29.778 "w_mbytes_per_sec": 0 00:38:29.778 }, 00:38:29.778 "claimed": false, 00:38:29.778 "zoned": false, 00:38:29.778 "supported_io_types": { 00:38:29.778 "read": true, 00:38:29.778 "write": true, 00:38:29.778 "unmap": true, 00:38:29.778 "flush": false, 00:38:29.778 "reset": true, 00:38:29.778 "nvme_admin": false, 00:38:29.778 "nvme_io": false, 00:38:29.778 "nvme_io_md": false, 00:38:29.778 "write_zeroes": true, 
00:38:29.778 "zcopy": false, 00:38:29.778 "get_zone_info": false, 00:38:29.778 "zone_management": false, 00:38:29.778 "zone_append": false, 00:38:29.778 "compare": false, 00:38:29.778 "compare_and_write": false, 00:38:29.778 "abort": false, 00:38:29.778 "seek_hole": true, 00:38:29.778 "seek_data": true, 00:38:29.778 "copy": false, 00:38:29.778 "nvme_iov_md": false 00:38:29.778 }, 00:38:29.778 "driver_specific": { 00:38:29.778 "lvol": { 00:38:29.778 "lvol_store_uuid": "283d789d-18d4-4413-aaed-d370cbd9e579", 00:38:29.778 "base_bdev": "aio_bdev", 00:38:29.778 "thin_provision": false, 00:38:29.778 "num_allocated_clusters": 38, 00:38:29.778 "snapshot": false, 00:38:29.778 "clone": false, 00:38:29.778 "esnap_clone": false 00:38:29.778 } 00:38:29.778 } 00:38:29.778 } 00:38:29.778 ] 00:38:29.778 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:29.778 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:29.778 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:30.037 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:30.037 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:30.037 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:30.296 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:30.296 11:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:30.554 [2024-11-17 11:32:55.181874] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:30.813 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:31.072 request: 00:38:31.072 { 00:38:31.072 "uuid": "283d789d-18d4-4413-aaed-d370cbd9e579", 00:38:31.072 "method": "bdev_lvol_get_lvstores", 00:38:31.072 "req_id": 1 00:38:31.072 } 00:38:31.072 Got JSON-RPC error response 00:38:31.072 response: 00:38:31.072 { 00:38:31.072 "code": -19, 00:38:31.072 "message": "No such device" 00:38:31.072 } 00:38:31.072 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:31.072 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:31.072 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:31.072 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:31.072 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:31.331 aio_bdev 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:31.331 11:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:31.590 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 24488183-54c8-41e9-a6e7-9124e0fa80e8 -t 2000 00:38:31.849 [ 00:38:31.849 { 00:38:31.849 "name": "24488183-54c8-41e9-a6e7-9124e0fa80e8", 00:38:31.849 "aliases": [ 00:38:31.849 "lvs/lvol" 00:38:31.849 ], 00:38:31.849 "product_name": "Logical Volume", 00:38:31.849 "block_size": 4096, 00:38:31.849 "num_blocks": 38912, 00:38:31.849 "uuid": "24488183-54c8-41e9-a6e7-9124e0fa80e8", 00:38:31.849 "assigned_rate_limits": { 00:38:31.849 "rw_ios_per_sec": 0, 00:38:31.849 "rw_mbytes_per_sec": 0, 00:38:31.849 
"r_mbytes_per_sec": 0, 00:38:31.849 "w_mbytes_per_sec": 0 00:38:31.849 }, 00:38:31.849 "claimed": false, 00:38:31.849 "zoned": false, 00:38:31.849 "supported_io_types": { 00:38:31.849 "read": true, 00:38:31.849 "write": true, 00:38:31.849 "unmap": true, 00:38:31.849 "flush": false, 00:38:31.849 "reset": true, 00:38:31.849 "nvme_admin": false, 00:38:31.849 "nvme_io": false, 00:38:31.849 "nvme_io_md": false, 00:38:31.849 "write_zeroes": true, 00:38:31.849 "zcopy": false, 00:38:31.849 "get_zone_info": false, 00:38:31.849 "zone_management": false, 00:38:31.849 "zone_append": false, 00:38:31.849 "compare": false, 00:38:31.849 "compare_and_write": false, 00:38:31.849 "abort": false, 00:38:31.849 "seek_hole": true, 00:38:31.849 "seek_data": true, 00:38:31.849 "copy": false, 00:38:31.849 "nvme_iov_md": false 00:38:31.849 }, 00:38:31.849 "driver_specific": { 00:38:31.849 "lvol": { 00:38:31.849 "lvol_store_uuid": "283d789d-18d4-4413-aaed-d370cbd9e579", 00:38:31.849 "base_bdev": "aio_bdev", 00:38:31.849 "thin_provision": false, 00:38:31.849 "num_allocated_clusters": 38, 00:38:31.849 "snapshot": false, 00:38:31.849 "clone": false, 00:38:31.849 "esnap_clone": false 00:38:31.849 } 00:38:31.849 } 00:38:31.849 } 00:38:31.849 ] 00:38:31.849 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:31.849 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:31.849 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:32.108 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:32.108 11:32:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:32.108 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:32.367 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:32.367 11:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24488183-54c8-41e9-a6e7-9124e0fa80e8 00:38:32.626 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 283d789d-18d4-4413-aaed-d370cbd9e579 00:38:32.884 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:33.143 00:38:33.143 real 0m19.746s 00:38:33.143 user 0m36.599s 00:38:33.143 sys 0m4.849s 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:33.143 ************************************ 00:38:33.143 END TEST lvs_grow_dirty 00:38:33.143 ************************************ 
00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:33.143 nvmf_trace.0 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:33.143 11:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:33.143 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:33.143 rmmod nvme_tcp 00:38:33.143 rmmod nvme_fabrics 00:38:33.402 rmmod nvme_keyring 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 429050 ']' 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 429050 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 429050 ']' 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 429050 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429050 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:33.402 11:32:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429050' 00:38:33.402 killing process with pid 429050 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 429050 00:38:33.402 11:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 429050 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:33.402 11:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.942 11:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:35.942 00:38:35.942 real 0m43.033s 00:38:35.942 user 0m55.676s 00:38:35.942 sys 0m8.689s 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:35.943 ************************************ 00:38:35.943 END TEST nvmf_lvs_grow 00:38:35.943 ************************************ 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:35.943 ************************************ 00:38:35.943 START TEST nvmf_bdev_io_wait 00:38:35.943 ************************************ 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:35.943 * Looking for test storage... 
00:38:35.943 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.943 --rc genhtml_branch_coverage=1 00:38:35.943 --rc genhtml_function_coverage=1 00:38:35.943 --rc genhtml_legend=1 00:38:35.943 --rc geninfo_all_blocks=1 00:38:35.943 --rc geninfo_unexecuted_blocks=1 00:38:35.943 00:38:35.943 ' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.943 --rc genhtml_branch_coverage=1 00:38:35.943 --rc genhtml_function_coverage=1 00:38:35.943 --rc genhtml_legend=1 00:38:35.943 --rc geninfo_all_blocks=1 00:38:35.943 --rc geninfo_unexecuted_blocks=1 00:38:35.943 00:38:35.943 ' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.943 --rc genhtml_branch_coverage=1 00:38:35.943 --rc genhtml_function_coverage=1 00:38:35.943 --rc genhtml_legend=1 00:38:35.943 --rc geninfo_all_blocks=1 00:38:35.943 --rc geninfo_unexecuted_blocks=1 00:38:35.943 00:38:35.943 ' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:35.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:35.943 --rc genhtml_branch_coverage=1 00:38:35.943 --rc genhtml_function_coverage=1 
00:38:35.943 --rc genhtml_legend=1 00:38:35.943 --rc geninfo_all_blocks=1 00:38:35.943 --rc geninfo_unexecuted_blocks=1 00:38:35.943 00:38:35.943 ' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:35.943 11:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.943 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.944 11:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:35.944 11:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:35.944 11:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:35.944 11:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:37.864 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:37.864 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:37.864 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:37.864 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:37.864 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:37.864 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:37.865 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:37.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:37.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:38:37.865 00:38:37.865 --- 10.0.0.2 ping statistics --- 00:38:37.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.865 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:37.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:37.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:38:37.865 00:38:37.865 --- 10.0.0.1 ping statistics --- 00:38:37.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.865 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:37.865 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=431802 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 431802 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 431802 ']' 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.865 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.123 [2024-11-17 11:33:02.563687] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:38.123 [2024-11-17 11:33:02.564832] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:38.123 [2024-11-17 11:33:02.564882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:38.123 [2024-11-17 11:33:02.636376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:38.123 [2024-11-17 11:33:02.683550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:38.123 [2024-11-17 11:33:02.683619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:38.123 [2024-11-17 11:33:02.683647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:38.124 [2024-11-17 11:33:02.683658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:38.124 [2024-11-17 11:33:02.683667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:38.124 [2024-11-17 11:33:02.685207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.124 [2024-11-17 11:33:02.685271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:38.124 [2024-11-17 11:33:02.685336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:38.124 [2024-11-17 11:33:02.685339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.124 [2024-11-17 11:33:02.685853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.383 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.383 [2024-11-17 11:33:02.895438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:38.383 [2024-11-17 11:33:02.895652] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:38.383 [2024-11-17 11:33:02.896555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:38.383 [2024-11-17 11:33:02.897375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.383 [2024-11-17 11:33:02.902070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.383 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.384 Malloc0 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.384 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.384 [2024-11-17 11:33:02.962261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=431825 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=431826 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:38.384 11:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=431829 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:38.384 { 00:38:38.384 "params": { 00:38:38.384 "name": "Nvme$subsystem", 00:38:38.384 "trtype": "$TEST_TRANSPORT", 00:38:38.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.384 "adrfam": "ipv4", 00:38:38.384 "trsvcid": "$NVMF_PORT", 00:38:38.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.384 "hdgst": ${hdgst:-false}, 00:38:38.384 "ddgst": ${ddgst:-false} 00:38:38.384 }, 00:38:38.384 "method": "bdev_nvme_attach_controller" 00:38:38.384 } 00:38:38.384 EOF 00:38:38.384 )") 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=431831 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:38.384 { 00:38:38.384 "params": { 00:38:38.384 "name": "Nvme$subsystem", 00:38:38.384 "trtype": "$TEST_TRANSPORT", 00:38:38.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.384 "adrfam": "ipv4", 00:38:38.384 "trsvcid": "$NVMF_PORT", 00:38:38.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.384 "hdgst": ${hdgst:-false}, 00:38:38.384 "ddgst": ${ddgst:-false} 00:38:38.384 }, 00:38:38.384 "method": "bdev_nvme_attach_controller" 00:38:38.384 } 00:38:38.384 EOF 00:38:38.384 )") 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:38.384 { 00:38:38.384 "params": { 00:38:38.384 "name": "Nvme$subsystem", 00:38:38.384 "trtype": "$TEST_TRANSPORT", 00:38:38.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.384 "adrfam": "ipv4", 00:38:38.384 "trsvcid": "$NVMF_PORT", 00:38:38.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.384 "hdgst": ${hdgst:-false}, 00:38:38.384 "ddgst": ${ddgst:-false} 00:38:38.384 }, 00:38:38.384 "method": "bdev_nvme_attach_controller" 00:38:38.384 } 00:38:38.384 EOF 00:38:38.384 )") 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:38.384 { 00:38:38.384 "params": { 00:38:38.384 "name": "Nvme$subsystem", 00:38:38.384 "trtype": "$TEST_TRANSPORT", 00:38:38.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.384 "adrfam": "ipv4", 00:38:38.384 "trsvcid": "$NVMF_PORT", 00:38:38.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.384 "hdgst": ${hdgst:-false}, 00:38:38.384 "ddgst": 
${ddgst:-false} 00:38:38.384 }, 00:38:38.384 "method": "bdev_nvme_attach_controller" 00:38:38.384 } 00:38:38.384 EOF 00:38:38.384 )") 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 431825 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
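Each bdevperf instance above receives its generated config as `--json /dev/fd/63`, i.e. through bash process substitution. A minimal illustration of that mechanism, with `cat` standing in for bdevperf:

```shell
#!/usr/bin/env bash
# Minimal illustration of the `--json /dev/fd/63` idiom used by the
# bdevperf invocations above: bash process substitution exposes the
# generated JSON to the consumer as a readable fd path.
# `cat` is a stand-in for the real bdevperf binary.
gen_config() {
  printf '%s\n' '{ "method": "bdev_nvme_attach_controller" }'
}

# <(gen_config) expands to a path such as /dev/fd/63.
cat <(gen_config)
```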
00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:38.384 "params": { 00:38:38.384 "name": "Nvme1", 00:38:38.384 "trtype": "tcp", 00:38:38.384 "traddr": "10.0.0.2", 00:38:38.384 "adrfam": "ipv4", 00:38:38.384 "trsvcid": "4420", 00:38:38.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:38.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:38.384 "hdgst": false, 00:38:38.384 "ddgst": false 00:38:38.384 }, 00:38:38.384 "method": "bdev_nvme_attach_controller" 00:38:38.384 }' 00:38:38.384 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:38.385 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:38.385 "params": { 00:38:38.385 "name": "Nvme1", 00:38:38.385 "trtype": "tcp", 00:38:38.385 "traddr": "10.0.0.2", 00:38:38.385 "adrfam": "ipv4", 00:38:38.385 "trsvcid": "4420", 00:38:38.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:38.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:38.385 "hdgst": false, 00:38:38.385 "ddgst": false 00:38:38.385 }, 00:38:38.385 "method": "bdev_nvme_attach_controller" 00:38:38.385 }' 00:38:38.385 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:38.385 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:38.385 "params": { 00:38:38.385 "name": "Nvme1", 00:38:38.385 "trtype": "tcp", 00:38:38.385 "traddr": "10.0.0.2", 00:38:38.385 "adrfam": "ipv4", 00:38:38.385 "trsvcid": "4420", 00:38:38.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:38.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:38.385 "hdgst": false, 00:38:38.385 "ddgst": false 00:38:38.385 }, 00:38:38.385 "method": "bdev_nvme_attach_controller" 00:38:38.385 }' 00:38:38.385 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 
-- # IFS=, 00:38:38.385 11:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:38.385 "params": { 00:38:38.385 "name": "Nvme1", 00:38:38.385 "trtype": "tcp", 00:38:38.385 "traddr": "10.0.0.2", 00:38:38.385 "adrfam": "ipv4", 00:38:38.385 "trsvcid": "4420", 00:38:38.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:38.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:38.385 "hdgst": false, 00:38:38.385 "ddgst": false 00:38:38.385 }, 00:38:38.385 "method": "bdev_nvme_attach_controller" 00:38:38.385 }' 00:38:38.385 [2024-11-17 11:33:03.012913] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:38.385 [2024-11-17 11:33:03.012913] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:38.385 [2024-11-17 11:33:03.012992] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:38.385 [2024-11-17 11:33:03.012992] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:38.385 [2024-11-17 11:33:03.013017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:38.385 [2024-11-17 11:33:03.013018] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:38.385 [2024-11-17 11:33:03.013096] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:38.385 [2024-11-17 11:33:03.013098] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:38.643 [2024-11-17 11:33:03.200121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.643 [2024-11-17 11:33:03.242473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:38.902 [2024-11-17 11:33:03.300634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.902 [2024-11-17 11:33:03.341991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:38.902 [2024-11-17 11:33:03.372977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.902 [2024-11-17 11:33:03.410357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:38.902 [2024-11-17 11:33:03.445725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.160 [2024-11-17 11:33:03.483139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:39.160 Running I/O for 1 seconds... 00:38:39.160 Running I/O for 1 seconds... 00:38:39.160 Running I/O for 1 seconds... 00:38:39.160 Running I/O for 1 seconds... 
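The four `Running I/O for 1 seconds...` lines come from four bdevperf instances launched in the background on distinct core masks and later reaped by PID (the `UNMAP_PID=431831` assignment and the later `wait` calls in the trace). A simplified sketch of that fan-out/reap pattern, where `run_io` is a hypothetical stand-in for the real bdevperf command line:

```shell
#!/usr/bin/env bash
# Simplified sketch of how bdev_io_wait.sh fans out one bdevperf
# instance per workload, each on its own core mask, then reaps each
# by PID. run_io is a hypothetical stand-in for the real bdevperf
# invocation shown in the trace.
run_io() { # $1 = core mask, $2 = workload
  echo "Running I/O for 1 seconds... (mask $1, workload $2)"
}

run_io 0x10 write & WRITE_PID=$!
run_io 0x20 read  & READ_PID=$!
run_io 0x40 flush & FLUSH_PID=$!
run_io 0x80 unmap & UNMAP_PID=$!

# Mirrors the per-PID wait lines in the log.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```

Reaping by explicit PID (rather than a bare `wait`) lets the script interleave the waits with other work, which is why the latency tables above appear between the individual `wait` lines.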
00:38:40.095 9645.00 IOPS, 37.68 MiB/s 00:38:40.095 Latency(us) 00:38:40.095 [2024-11-17T10:33:04.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.095 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:40.095 Nvme1n1 : 1.01 9706.92 37.92 0.00 0.00 13133.40 1941.81 15340.28 00:38:40.095 [2024-11-17T10:33:04.753Z] =================================================================================================================== 00:38:40.095 [2024-11-17T10:33:04.753Z] Total : 9706.92 37.92 0.00 0.00 13133.40 1941.81 15340.28 00:38:40.095 194352.00 IOPS, 759.19 MiB/s 00:38:40.095 Latency(us) 00:38:40.095 [2024-11-17T10:33:04.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.095 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:40.095 Nvme1n1 : 1.00 193991.61 757.78 0.00 0.00 656.33 291.27 1844.72 00:38:40.095 [2024-11-17T10:33:04.753Z] =================================================================================================================== 00:38:40.095 [2024-11-17T10:33:04.753Z] Total : 193991.61 757.78 0.00 0.00 656.33 291.27 1844.72 00:38:40.095 8921.00 IOPS, 34.85 MiB/s 00:38:40.095 Latency(us) 00:38:40.095 [2024-11-17T10:33:04.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.095 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:40.095 Nvme1n1 : 1.01 8971.12 35.04 0.00 0.00 14203.50 4878.79 18350.08 00:38:40.095 [2024-11-17T10:33:04.753Z] =================================================================================================================== 00:38:40.095 [2024-11-17T10:33:04.753Z] Total : 8971.12 35.04 0.00 0.00 14203.50 4878.79 18350.08 00:38:40.095 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 431826 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@39 -- # wait 431829 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 431831 00:38:40.353 9657.00 IOPS, 37.72 MiB/s 00:38:40.353 Latency(us) 00:38:40.353 [2024-11-17T10:33:05.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.353 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:40.353 Nvme1n1 : 1.01 9740.33 38.05 0.00 0.00 13100.78 4126.34 19126.80 00:38:40.353 [2024-11-17T10:33:05.011Z] =================================================================================================================== 00:38:40.353 [2024-11-17T10:33:05.011Z] Total : 9740.33 38.05 0.00 0.00 13100.78 4126.34 19126.80 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:40.353 11:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:40.353 rmmod nvme_tcp 00:38:40.353 rmmod nvme_fabrics 00:38:40.353 rmmod nvme_keyring 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 431802 ']' 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 431802 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 431802 ']' 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 431802 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:40.353 11:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 431802 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 431802' 00:38:40.614 killing process with pid 431802 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 431802 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 431802 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.614 11:33:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.614 11:33:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.146 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:43.146 00:38:43.146 real 0m7.119s 00:38:43.146 user 0m13.887s 00:38:43.146 sys 0m3.923s 00:38:43.146 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:43.146 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:43.147 ************************************ 00:38:43.147 END TEST nvmf_bdev_io_wait 00:38:43.147 ************************************ 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:43.147 ************************************ 00:38:43.147 START TEST nvmf_queue_depth 00:38:43.147 ************************************ 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:43.147 * Looking for test storage... 
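The `run_test nvmf_queue_depth ...` invocation and the starred `START TEST` / `END TEST` banners around it come from autotest's `run_test` helper, which frames a sub-script with markers and times it. Roughly, as a simplified stand-in (not the real `autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Rough sketch of the run_test banner/timing behaviour visible in the
# log: frame the sub-test with START TEST / END TEST markers and time
# it (hence the real/user/sys lines). Simplified stand-in only.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test demo_test true   # the harness passes the real test script here
```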
00:38:43.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.147 --rc genhtml_branch_coverage=1 00:38:43.147 --rc genhtml_function_coverage=1 00:38:43.147 --rc genhtml_legend=1 00:38:43.147 --rc geninfo_all_blocks=1 00:38:43.147 --rc geninfo_unexecuted_blocks=1 00:38:43.147 00:38:43.147 ' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.147 --rc genhtml_branch_coverage=1 00:38:43.147 --rc genhtml_function_coverage=1 00:38:43.147 --rc genhtml_legend=1 00:38:43.147 --rc geninfo_all_blocks=1 00:38:43.147 --rc geninfo_unexecuted_blocks=1 00:38:43.147 00:38:43.147 ' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.147 --rc genhtml_branch_coverage=1 00:38:43.147 --rc genhtml_function_coverage=1 00:38:43.147 --rc genhtml_legend=1 00:38:43.147 --rc geninfo_all_blocks=1 00:38:43.147 --rc geninfo_unexecuted_blocks=1 00:38:43.147 00:38:43.147 ' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:43.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:43.147 --rc genhtml_branch_coverage=1 00:38:43.147 --rc genhtml_function_coverage=1 00:38:43.147 --rc genhtml_legend=1 00:38:43.147 --rc 
geninfo_all_blocks=1 00:38:43.147 --rc geninfo_unexecuted_blocks=1 00:38:43.147 00:38:43.147 ' 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:43.147 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.148 11:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:43.148 11:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:43.148 11:33:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:43.148 11:33:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:45.050 
11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:45.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:45.050 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:45.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:45.050 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:45.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:45.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:45.051 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:45.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:45.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:38:45.051 00:38:45.051 --- 10.0.0.2 ping statistics --- 00:38:45.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:45.051 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:45.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:45.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:38:45.051 00:38:45.051 --- 10.0.0.1 ping statistics --- 00:38:45.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:45.051 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:45.051 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=434554 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 434554 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 434554 ']' 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.051 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.051 [2024-11-17 11:33:09.603780] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:45.051 [2024-11-17 11:33:09.604880] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:38:45.051 [2024-11-17 11:33:09.604945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:45.051 [2024-11-17 11:33:09.678299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.310 [2024-11-17 11:33:09.727395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:45.310 [2024-11-17 11:33:09.727460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:45.310 [2024-11-17 11:33:09.727489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:45.310 [2024-11-17 11:33:09.727501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:45.310 [2024-11-17 11:33:09.727512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:45.310 [2024-11-17 11:33:09.728150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.310 [2024-11-17 11:33:09.821203] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:45.310 [2024-11-17 11:33:09.821561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 [2024-11-17 11:33:09.876760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 Malloc0 00:38:45.310 11:33:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.310 [2024-11-17 11:33:09.936931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.310 
11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=434575 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 434575 /var/tmp/bdevperf.sock 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 434575 ']' 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:45.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.310 11:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.568 [2024-11-17 11:33:09.984072] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:38:45.568 [2024-11-17 11:33:09.984150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434575 ] 00:38:45.568 [2024-11-17 11:33:10.054848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.568 [2024-11-17 11:33:10.100498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.568 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:45.568 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:45.568 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:45.568 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.568 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.827 NVMe0n1 00:38:45.827 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.827 11:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:45.827 Running I/O for 10 seconds... 
00:38:48.135 8251.00 IOPS, 32.23 MiB/s [2024-11-17T10:33:13.726Z] 8700.50 IOPS, 33.99 MiB/s [2024-11-17T10:33:14.661Z] 8753.00 IOPS, 34.19 MiB/s [2024-11-17T10:33:15.595Z] 8736.00 IOPS, 34.12 MiB/s [2024-11-17T10:33:16.528Z] 8809.20 IOPS, 34.41 MiB/s [2024-11-17T10:33:17.462Z] 8873.67 IOPS, 34.66 MiB/s [2024-11-17T10:33:18.835Z] 8886.57 IOPS, 34.71 MiB/s [2024-11-17T10:33:19.767Z] 8884.25 IOPS, 34.70 MiB/s [2024-11-17T10:33:20.701Z] 8890.67 IOPS, 34.73 MiB/s [2024-11-17T10:33:20.701Z] 8907.50 IOPS, 34.79 MiB/s 00:38:56.043 Latency(us) 00:38:56.043 [2024-11-17T10:33:20.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.043 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:56.043 Verification LBA range: start 0x0 length 0x4000 00:38:56.043 NVMe0n1 : 10.08 8937.77 34.91 0.00 0.00 114113.97 22427.88 71458.51 00:38:56.043 [2024-11-17T10:33:20.701Z] =================================================================================================================== 00:38:56.043 [2024-11-17T10:33:20.701Z] Total : 8937.77 34.91 0.00 0.00 114113.97 22427.88 71458.51 00:38:56.043 { 00:38:56.043 "results": [ 00:38:56.043 { 00:38:56.043 "job": "NVMe0n1", 00:38:56.043 "core_mask": "0x1", 00:38:56.043 "workload": "verify", 00:38:56.043 "status": "finished", 00:38:56.043 "verify_range": { 00:38:56.043 "start": 0, 00:38:56.043 "length": 16384 00:38:56.043 }, 00:38:56.043 "queue_depth": 1024, 00:38:56.043 "io_size": 4096, 00:38:56.043 "runtime": 10.080706, 00:38:56.043 "iops": 8937.766858789453, 00:38:56.043 "mibps": 34.9131517921463, 00:38:56.043 "io_failed": 0, 00:38:56.043 "io_timeout": 0, 00:38:56.043 "avg_latency_us": 114113.9732150108, 00:38:56.043 "min_latency_us": 22427.875555555554, 00:38:56.043 "max_latency_us": 71458.5125925926 00:38:56.043 } 00:38:56.043 ], 00:38:56.043 "core_count": 1 00:38:56.043 } 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 434575 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 434575 ']' 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 434575 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434575 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434575' 00:38:56.043 killing process with pid 434575 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 434575 00:38:56.043 Received shutdown signal, test time was about 10.000000 seconds 00:38:56.043 00:38:56.043 Latency(us) 00:38:56.043 [2024-11-17T10:33:20.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.043 [2024-11-17T10:33:20.701Z] =================================================================================================================== 00:38:56.043 [2024-11-17T10:33:20.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:56.043 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 434575 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.301 rmmod nvme_tcp 00:38:56.301 rmmod nvme_fabrics 00:38:56.301 rmmod nvme_keyring 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 434554 ']' 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 434554 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 434554 ']' 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 434554 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@959 -- # uname 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434554 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434554' 00:38:56.301 killing process with pid 434554 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 434554 00:38:56.301 11:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 434554 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.560 11:33:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.560 11:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.462 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.462 00:38:58.462 real 0m15.805s 00:38:58.462 user 0m22.007s 00:38:58.462 sys 0m3.199s 00:38:58.462 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.462 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.462 ************************************ 00:38:58.462 END TEST nvmf_queue_depth 00:38:58.462 ************************************ 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.721 ************************************ 00:38:58.721 START TEST 
nvmf_target_multipath 00:38:58.721 ************************************ 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:58.721 * Looking for test storage... 00:38:58.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.721 11:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.721 --rc genhtml_branch_coverage=1 00:38:58.721 --rc genhtml_function_coverage=1 00:38:58.721 --rc genhtml_legend=1 00:38:58.721 --rc geninfo_all_blocks=1 00:38:58.721 --rc geninfo_unexecuted_blocks=1 00:38:58.721 00:38:58.721 ' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.721 --rc genhtml_branch_coverage=1 00:38:58.721 --rc genhtml_function_coverage=1 00:38:58.721 --rc genhtml_legend=1 00:38:58.721 --rc geninfo_all_blocks=1 00:38:58.721 --rc geninfo_unexecuted_blocks=1 00:38:58.721 00:38:58.721 ' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.721 --rc genhtml_branch_coverage=1 00:38:58.721 --rc genhtml_function_coverage=1 00:38:58.721 --rc genhtml_legend=1 00:38:58.721 --rc geninfo_all_blocks=1 00:38:58.721 --rc geninfo_unexecuted_blocks=1 00:38:58.721 00:38:58.721 ' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:58.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.721 --rc genhtml_branch_coverage=1 00:38:58.721 --rc genhtml_function_coverage=1 00:38:58.721 --rc genhtml_legend=1 00:38:58.721 --rc geninfo_all_blocks=1 00:38:58.721 --rc geninfo_unexecuted_blocks=1 00:38:58.721 00:38:58.721 ' 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.721 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.722 11:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.722 11:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.722 11:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:01.318 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.318 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.319 11:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:01.319 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:01.319 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:01.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.319 11:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:01.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.319 11:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.319 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.320 11:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:01.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:39:01.320 00:39:01.320 --- 10.0.0.2 ping statistics --- 00:39:01.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.320 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:01.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:39:01.320 00:39:01.320 --- 10.0.0.1 ping statistics --- 00:39:01.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.320 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:01.320 only one NIC for nvmf test 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:01.320 11:33:25 
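The `nvmf_tcp_init` trace above moves one port of the NIC pair into a network namespace so a single host can act as both NVMe-oF target and initiator, then verifies the link with a ping in each direction. A dry-run sketch of that sequence, reconstructed from the commands visible in the log (device names `cvl_0_0`/`cvl_0_1`, namespace `cvl_0_0_ns_spdk`, addresses 10.0.0.1/2, port 4420 all come from the trace); the function only prints the commands, so it is safe to run unprivileged:

```shell
# Dry-run sketch of the netns topology built by nvmf_tcp_init (nvmf/common.sh).
# Names and addresses are taken from the log above; pipe the output to
# `sudo bash` to actually apply it.
nvmf_tcp_init_dryrun() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    cat <<EOF
ip -4 addr flush $tgt
ip -4 addr flush $ini
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
nvmf_tcp_init_dryrun
```

The iptables rule is tagged with an `SPDK_NVMF` comment on insertion, which is what makes the later teardown able to find and drop it.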
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:01.320 rmmod nvme_tcp 00:39:01.320 rmmod nvme_fabrics 00:39:01.320 rmmod nvme_keyring 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:01.320 11:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:01.320 11:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:03.308 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.309 
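The `iptr` step traced above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) is the cleanup half of the comment-tagging trick: because every rule SPDK inserts carries `-m comment --comment SPDK_NVMF...`, teardown can remove all of them at once by filtering the save dump and restoring the remainder. A sketch with a made-up stand-in for real `iptables-save` output (the dump below is illustrative, not captured state):

```shell
# Sketch of the iptr cleanup used by nvmf_tcp_fini: drop every SPDK-tagged
# rule from an iptables-save dump. $dump is a hypothetical example dump.
dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p icmp -j ACCEPT'
cleaned="$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)"
printf '%s\n' "$cleaned"   # SPDK rule is gone, unrelated rules survive
```

In the real script the filtered dump is fed straight back into `iptables-restore`, leaving the firewall as it was before the test.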
11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:03.309 00:39:03.309 real 0m4.488s 00:39:03.309 user 0m0.896s 00:39:03.309 sys 0m1.595s 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:03.309 ************************************ 00:39:03.309 END TEST nvmf_target_multipath 00:39:03.309 ************************************ 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:03.309 ************************************ 00:39:03.309 START TEST nvmf_zcopy 00:39:03.309 ************************************ 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:03.309 * Looking for test storage... 
00:39:03.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:03.309 11:33:27 
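The `cmp_versions 1.15 '<' 2` walk traced above splits both version strings on `.`, `-`, or `:` and compares them field by field, padding the shorter one with zeros. A minimal re-implementation of just the less-than case (a sketch of the logic, not the full `cmp_versions` from scripts/common.sh):

```shell
# lt A B: succeed iff dotted version A sorts strictly before B,
# mirroring the field-by-field comparison in scripts/common.sh.
lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # missing fields compare as 0, as in the original
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions: not strictly less
}
lt 1.15 2 && echo "1.15 < 2"
```

With the inputs from the log, the first field already decides it (1 < 2), which is why the trace returns 0 without examining the second field.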
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.309 --rc genhtml_branch_coverage=1 00:39:03.309 --rc genhtml_function_coverage=1 00:39:03.309 --rc genhtml_legend=1 00:39:03.309 --rc geninfo_all_blocks=1 00:39:03.309 --rc geninfo_unexecuted_blocks=1 00:39:03.309 00:39:03.309 ' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.309 --rc genhtml_branch_coverage=1 00:39:03.309 --rc genhtml_function_coverage=1 00:39:03.309 --rc genhtml_legend=1 00:39:03.309 --rc geninfo_all_blocks=1 00:39:03.309 --rc geninfo_unexecuted_blocks=1 00:39:03.309 00:39:03.309 ' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.309 --rc genhtml_branch_coverage=1 00:39:03.309 --rc genhtml_function_coverage=1 00:39:03.309 --rc genhtml_legend=1 00:39:03.309 --rc geninfo_all_blocks=1 00:39:03.309 --rc geninfo_unexecuted_blocks=1 00:39:03.309 00:39:03.309 ' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:03.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:03.309 --rc genhtml_branch_coverage=1 00:39:03.309 --rc genhtml_function_coverage=1 00:39:03.309 --rc genhtml_legend=1 00:39:03.309 --rc geninfo_all_blocks=1 00:39:03.309 --rc geninfo_unexecuted_blocks=1 00:39:03.309 00:39:03.309 ' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:03.309 11:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:03.309 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:03.310 11:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:03.310 11:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:05.845 
11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:05.845 11:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:05.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:05.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:05.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:05.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:05.845 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:05.846 11:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:05.846 11:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:05.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:05.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:39:05.846 00:39:05.846 --- 10.0.0.2 ping statistics --- 00:39:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:05.846 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:05.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:05.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:39:05.846 00:39:05.846 --- 10.0.0.1 ping statistics --- 00:39:05.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:05.846 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=439752 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 439752 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 439752 ']' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.846 [2024-11-17 11:33:30.191121] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:05.846 [2024-11-17 11:33:30.192274] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:05.846 [2024-11-17 11:33:30.192328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:05.846 [2024-11-17 11:33:30.265074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.846 [2024-11-17 11:33:30.309888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:05.846 [2024-11-17 11:33:30.309939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:05.846 [2024-11-17 11:33:30.309953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:05.846 [2024-11-17 11:33:30.309963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:05.846 [2024-11-17 11:33:30.309972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:05.846 [2024-11-17 11:33:30.310560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:05.846 [2024-11-17 11:33:30.395322] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:05.846 [2024-11-17 11:33:30.395641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.846 [2024-11-17 11:33:30.459175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.846 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.846 
11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.847 [2024-11-17 11:33:30.475360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:05.847 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:06.108 malloc0 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:06.108 { 00:39:06.108 "params": { 00:39:06.108 "name": "Nvme$subsystem", 00:39:06.108 "trtype": "$TEST_TRANSPORT", 00:39:06.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:06.108 "adrfam": "ipv4", 00:39:06.108 "trsvcid": "$NVMF_PORT", 00:39:06.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:06.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:06.108 "hdgst": ${hdgst:-false}, 00:39:06.108 "ddgst": ${ddgst:-false} 00:39:06.108 }, 00:39:06.108 "method": "bdev_nvme_attach_controller" 00:39:06.108 } 00:39:06.108 EOF 00:39:06.108 )") 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:06.108 11:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:06.108 11:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:06.108 "params": { 00:39:06.108 "name": "Nvme1", 00:39:06.108 "trtype": "tcp", 00:39:06.108 "traddr": "10.0.0.2", 00:39:06.108 "adrfam": "ipv4", 00:39:06.108 "trsvcid": "4420", 00:39:06.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:06.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:06.108 "hdgst": false, 00:39:06.108 "ddgst": false 00:39:06.108 }, 00:39:06.108 "method": "bdev_nvme_attach_controller" 00:39:06.108 }' 00:39:06.108 [2024-11-17 11:33:30.559017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:39:06.108 [2024-11-17 11:33:30.559082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439775 ] 00:39:06.108 [2024-11-17 11:33:30.625022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.108 [2024-11-17 11:33:30.673259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.367 Running I/O for 10 seconds... 
00:39:08.691 5683.00 IOPS, 44.40 MiB/s [2024-11-17T10:33:34.289Z] 5738.50 IOPS, 44.83 MiB/s [2024-11-17T10:33:35.224Z] 5748.67 IOPS, 44.91 MiB/s [2024-11-17T10:33:36.162Z] 5752.75 IOPS, 44.94 MiB/s [2024-11-17T10:33:37.102Z] 5763.40 IOPS, 45.03 MiB/s [2024-11-17T10:33:38.044Z] 5769.00 IOPS, 45.07 MiB/s [2024-11-17T10:33:39.424Z] 5779.29 IOPS, 45.15 MiB/s [2024-11-17T10:33:40.358Z] 5780.62 IOPS, 45.16 MiB/s [2024-11-17T10:33:41.297Z] 5783.33 IOPS, 45.18 MiB/s [2024-11-17T10:33:41.297Z] 5782.90 IOPS, 45.18 MiB/s 00:39:16.639 Latency(us) 00:39:16.639 [2024-11-17T10:33:41.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:16.639 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:16.639 Verification LBA range: start 0x0 length 0x1000 00:39:16.639 Nvme1n1 : 10.02 5785.89 45.20 0.00 0.00 22062.75 3640.89 29321.29 00:39:16.639 [2024-11-17T10:33:41.297Z] =================================================================================================================== 00:39:16.639 [2024-11-17T10:33:41.297Z] Total : 5785.89 45.20 0.00 0.00 22062.75 3640.89 29321.29 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=440999 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:16.639 11:33:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:16.639 { 00:39:16.639 "params": { 00:39:16.639 "name": "Nvme$subsystem", 00:39:16.639 "trtype": "$TEST_TRANSPORT", 00:39:16.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:16.639 "adrfam": "ipv4", 00:39:16.639 "trsvcid": "$NVMF_PORT", 00:39:16.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:16.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:16.639 "hdgst": ${hdgst:-false}, 00:39:16.639 "ddgst": ${ddgst:-false} 00:39:16.639 }, 00:39:16.639 "method": "bdev_nvme_attach_controller" 00:39:16.639 } 00:39:16.639 EOF 00:39:16.639 )") 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:16.639 [2024-11-17 11:33:41.251135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.251174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:16.639 11:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:16.639 "params": { 00:39:16.639 "name": "Nvme1", 00:39:16.639 "trtype": "tcp", 00:39:16.639 "traddr": "10.0.0.2", 00:39:16.639 "adrfam": "ipv4", 00:39:16.639 "trsvcid": "4420", 00:39:16.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:16.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:16.639 "hdgst": false, 00:39:16.639 "ddgst": false 00:39:16.639 }, 00:39:16.639 "method": "bdev_nvme_attach_controller" 00:39:16.639 }' 00:39:16.639 [2024-11-17 11:33:41.259062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.259083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 [2024-11-17 11:33:41.267059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.267080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 [2024-11-17 11:33:41.275059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.275091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 [2024-11-17 11:33:41.283069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.283100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 [2024-11-17 11:33:41.291059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.639 [2024-11-17 11:33:41.291079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.639 [2024-11-17 11:33:41.291735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:16.639 [2024-11-17 11:33:41.291800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440999 ] 00:39:16.898 [2024-11-17 11:33:41.299060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.299082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.307057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.307077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.315056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.315076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.323055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.323074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.331056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.331075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.339056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.339075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.347060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.347080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:16.898 [2024-11-17 11:33:41.355056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.355076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.362400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.898 [2024-11-17 11:33:41.363056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.363075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.371100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.371132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.379090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.379122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.387058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.387077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.395057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.395076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.403056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.403076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:16.898 [2024-11-17 11:33:41.410712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.898 [2024-11-17 11:33:41.411057] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:16.898 [2024-11-17 11:33:41.411076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
[... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for each add-namespace attempt through 11:33:41.739 ...]
00:39:17.157 Running I/O for 5 seconds... 
[... error pair repeated for each add-namespace attempt through 11:33:42.745 ...]
00:39:18.193 11659.00 IOPS, 91.09 MiB/s [2024-11-17T10:33:42.851Z] 
[... error pair repeated for each add-namespace attempt through 11:33:43.549 ...]
[2024-11-17 11:33:43.565291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.972 [2024-11-17 11:33:43.565317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:18.972 [2024-11-17 11:33:43.582984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.972 [2024-11-17 11:33:43.583010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.972 [2024-11-17 11:33:43.592842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.972 [2024-11-17 11:33:43.592868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.972 [2024-11-17 11:33:43.608353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.972 [2024-11-17 11:33:43.608378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.972 [2024-11-17 11:33:43.627094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:18.972 [2024-11-17 11:33:43.627120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.636683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.636710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.652199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.652225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.661826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.661867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.675316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.675341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.684412] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.684453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.700145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.700170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.709715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.709743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.723473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.723497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.733347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.733372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.746887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.746926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 11726.00 IOPS, 91.61 MiB/s [2024-11-17T10:33:43.889Z] [2024-11-17 11:33:43.756431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.756457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.771848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.771888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.791608] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.791634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.802459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.802484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.813539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.813579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.829110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.829136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.847230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.847257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.857391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.857418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.231 [2024-11-17 11:33:43.873178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.231 [2024-11-17 11:33:43.873202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.889735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.889763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.905325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.905350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.923313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.923338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.933539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.933581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.949628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.949655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.964363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.964391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.973870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.973895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:43.988235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:43.988274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.007911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.007936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.017372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 
[2024-11-17 11:33:44.017399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.033204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.033230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.051007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.051042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.061225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.061249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.076519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.076558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.096068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.096093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.114228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.114253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.124238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.124279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.490 [2024-11-17 11:33:44.139823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.490 [2024-11-17 11:33:44.139849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.159497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.159546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.180246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.180286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.196384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.196410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.205840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.205868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.220876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.220901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.238087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.238114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.252785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.252813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.271290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.271314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:19.749 [2024-11-17 11:33:44.280640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.280668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.296821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.296861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.315867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.315892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.325136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.325163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.341167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.341202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.357293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.357318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.375047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.375074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.385621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.385647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:19.749 [2024-11-17 11:33:44.398002] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:19.749 [2024-11-17 11:33:44.398030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.412475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.412503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.432817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.432842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.448622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.448651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.467717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.467744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.488235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.488261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.504249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.504274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.513744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.513770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.528428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.528453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.545589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.545616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.563275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.563303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.573271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.573311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.588251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.588277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.606450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.606477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.618357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.618386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.628102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.628137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.644240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 
[2024-11-17 11:33:44.644267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.008 [2024-11-17 11:33:44.653932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.008 [2024-11-17 11:33:44.653960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.668543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.668585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.687689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.687729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.697605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.697635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.713585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.713610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.731310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.731334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.741320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.741344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.754556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.754582] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 11726.00 IOPS, 91.61 MiB/s [2024-11-17T10:33:44.926Z] [2024-11-17 11:33:44.764406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.764430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.780856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.268 [2024-11-17 11:33:44.780881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.268 [2024-11-17 11:33:44.798641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.798667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.808199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.808224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.824835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.824859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.843612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.843638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.852960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.852986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.868929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.868953] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.887199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.887224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.897690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.897727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.911241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.911266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.269 [2024-11-17 11:33:44.920263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.269 [2024-11-17 11:33:44.920290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:44.935971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:44.935996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:44.956177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:44.956203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:44.974175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:44.974200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:44.987622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:44.987649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:20.529 [2024-11-17 11:33:44.997307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:44.997346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.011797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.011838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.021265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.021289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.036543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.036570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.046105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.046131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.059915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.059940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.079165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.079190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.088917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:20.529 [2024-11-17 11:33:45.088944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.529 [2024-11-17 11:33:45.105606] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:20.529 [2024-11-17 11:33:45.105633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats continuously, timestamps 11:33:45.121021 through 11:33:45.744339 ...]
00:39:21.310 11739.25 IOPS, 91.71 MiB/s [2024-11-17T10:33:45.968Z]
[... the same error pair continues repeating, timestamps 11:33:45.762830 through 11:33:46.701055 ...]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the subsystem.c:2123 / nvmf_rpc.c:1517 error pair repeats, timestamps 11:33:46.718669 through 11:33:46.744851 ...]
00:39:22.352 11737.80 IOPS, 91.70 MiB/s [2024-11-17T10:33:47.010Z]
[... one more occurrence of the error pair at 11:33:46.762136 ...]
00:39:22.352
00:39:22.352 Latency(us)
00:39:22.352 [2024-11-17T10:33:47.010Z] Device Information                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:39:22.352 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:22.352 	 Nvme1n1                             :       5.01   11738.98      91.71       0.00     0.00   10890.09    2973.39   17961.72
00:39:22.352 [2024-11-17T10:33:47.010Z] ===================================================================================================================
00:39:22.352 [2024-11-17T10:33:47.010Z] Total                                  :              11738.98      91.71       0.00     0.00   10890.09    2973.39   17961.72
[... the error pair continues repeating, timestamps 11:33:46.771066 through 11:33:46.971079 ...]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (440999) - No such process 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 440999 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:22.353 delay0 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:22.353 11:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:22.353 11:33:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:22.613 [2024-11-17 11:33:47.138706] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:30.732 [2024-11-17 11:33:54.335193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662850 is same with the state(6) to be set 00:39:30.732 Initializing NVMe Controllers 00:39:30.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:30.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:30.732 Initialization complete. Launching workers. 00:39:30.732 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 21176 00:39:30.732 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21299, failed to submit 118 00:39:30.732 success 21206, unsuccessful 93, failed 0 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:30.732 
11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.732 rmmod nvme_tcp 00:39:30.732 rmmod nvme_fabrics 00:39:30.732 rmmod nvme_keyring 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 439752 ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 439752 ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439752' 00:39:30.732 killing process with pid 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@973 -- # kill 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 439752 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.732 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.733 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.733 11:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:32.108 00:39:32.108 real 0m29.005s 00:39:32.108 user 0m41.575s 00:39:32.108 sys 0m10.179s 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:32.108 ************************************ 00:39:32.108 END TEST nvmf_zcopy 00:39:32.108 ************************************ 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:32.108 ************************************ 00:39:32.108 START TEST nvmf_nmic 00:39:32.108 ************************************ 00:39:32.108 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:32.367 * Looking for test storage... 
00:39:32.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.367 --rc genhtml_branch_coverage=1 00:39:32.367 --rc genhtml_function_coverage=1 00:39:32.367 --rc genhtml_legend=1 00:39:32.367 --rc geninfo_all_blocks=1 00:39:32.367 --rc geninfo_unexecuted_blocks=1 00:39:32.367 00:39:32.367 ' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.367 --rc genhtml_branch_coverage=1 00:39:32.367 --rc genhtml_function_coverage=1 00:39:32.367 --rc genhtml_legend=1 00:39:32.367 --rc geninfo_all_blocks=1 00:39:32.367 --rc geninfo_unexecuted_blocks=1 00:39:32.367 00:39:32.367 ' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.367 --rc genhtml_branch_coverage=1 00:39:32.367 --rc genhtml_function_coverage=1 00:39:32.367 --rc genhtml_legend=1 00:39:32.367 --rc geninfo_all_blocks=1 00:39:32.367 --rc geninfo_unexecuted_blocks=1 00:39:32.367 00:39:32.367 ' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:32.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.367 --rc genhtml_branch_coverage=1 00:39:32.367 --rc genhtml_function_coverage=1 00:39:32.367 --rc genhtml_legend=1 00:39:32.367 --rc geninfo_all_blocks=1 00:39:32.367 --rc geninfo_unexecuted_blocks=1 00:39:32.367 00:39:32.367 ' 00:39:32.367 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:32.368 11:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:34.899 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:34.900 11:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:34.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:34.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:34.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:34.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:34.900 11:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:34.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:34.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:39:34.900 00:39:34.900 --- 10.0.0.2 ping statistics --- 00:39:34.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.900 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:34.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:34.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:39:34.900 00:39:34.900 --- 10.0.0.1 ping statistics --- 00:39:34.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.900 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:34.900 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=444452 
00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 444452 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 444452 ']' 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:34.901 [2024-11-17 11:33:59.280723] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:34.901 [2024-11-17 11:33:59.281902] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:34.901 [2024-11-17 11:33:59.281984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.901 [2024-11-17 11:33:59.355725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:34.901 [2024-11-17 11:33:59.406633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.901 [2024-11-17 11:33:59.406703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.901 [2024-11-17 11:33:59.406731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:34.901 [2024-11-17 11:33:59.406743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:34.901 [2024-11-17 11:33:59.406752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:34.901 [2024-11-17 11:33:59.408280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.901 [2024-11-17 11:33:59.408343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:34.901 [2024-11-17 11:33:59.408411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:34.901 [2024-11-17 11:33:59.408414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.901 [2024-11-17 11:33:59.497984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:34.901 [2024-11-17 11:33:59.498192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:34.901 [2024-11-17 11:33:59.498564] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:34.901 [2024-11-17 11:33:59.499028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:34.901 [2024-11-17 11:33:59.499248] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:34.901 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:34.901 [2024-11-17 11:33:59.549182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.159 Malloc0 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.159 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 [2024-11-17 11:33:59.617412] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:35.160 test case1: single bdev can't be used in multiple subsystems 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 [2024-11-17 11:33:59.641098] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:35.160 [2024-11-17 11:33:59.641134] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:35.160 [2024-11-17 11:33:59.641152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:35.160 request: 00:39:35.160 { 00:39:35.160 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:35.160 "namespace": { 00:39:35.160 "bdev_name": "Malloc0", 00:39:35.160 "no_auto_visible": false 00:39:35.160 }, 00:39:35.160 "method": "nvmf_subsystem_add_ns", 00:39:35.160 "req_id": 1 00:39:35.160 } 00:39:35.160 Got JSON-RPC error response 00:39:35.160 response: 00:39:35.160 { 00:39:35.160 "code": -32602, 00:39:35.160 "message": "Invalid parameters" 00:39:35.160 } 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:35.160 Adding namespace failed - expected result. 
00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:35.160 test case2: host connect to nvmf target in multiple paths 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:35.160 [2024-11-17 11:33:59.649187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:35.160 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:35.418 11:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:35.418 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:35.418 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:35.418 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:35.418 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:35.418 11:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:37.952 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:37.953 11:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:37.953 [global] 00:39:37.953 thread=1 00:39:37.953 invalidate=1 00:39:37.953 rw=write 00:39:37.953 time_based=1 00:39:37.953 runtime=1 00:39:37.953 ioengine=libaio 00:39:37.953 direct=1 00:39:37.953 bs=4096 00:39:37.953 iodepth=1 00:39:37.953 norandommap=0 00:39:37.953 numjobs=1 00:39:37.953 00:39:37.953 verify_dump=1 00:39:37.953 verify_backlog=512 00:39:37.953 verify_state_save=0 00:39:37.953 do_verify=1 00:39:37.953 verify=crc32c-intel 00:39:37.953 [job0] 00:39:37.953 filename=/dev/nvme0n1 00:39:37.953 Could not set queue depth (nvme0n1) 00:39:37.953 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.953 fio-3.35 00:39:37.953 Starting 1 thread 00:39:38.890 00:39:38.890 job0: (groupid=0, jobs=1): err= 0: pid=444953: Sun Nov 17 
11:34:03 2024 00:39:38.890 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:39:38.890 slat (nsec): min=5700, max=32009, avg=14799.83, stdev=4671.87 00:39:38.890 clat (usec): min=40665, max=41059, avg=40967.53, stdev=75.87 00:39:38.890 lat (usec): min=40671, max=41074, avg=40982.33, stdev=76.55 00:39:38.890 clat percentiles (usec): 00:39:38.890 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:38.890 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:38.890 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:38.890 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:38.890 | 99.99th=[41157] 00:39:38.890 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:39:38.890 slat (nsec): min=5290, max=28137, avg=6261.48, stdev=1560.53 00:39:38.890 clat (usec): min=134, max=264, avg=143.46, stdev= 8.72 00:39:38.890 lat (usec): min=140, max=293, avg=149.73, stdev= 9.44 00:39:38.890 clat percentiles (usec): 00:39:38.890 | 1.00th=[ 137], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 139], 00:39:38.890 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 143], 00:39:38.890 | 70.00th=[ 145], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:39:38.890 | 99.00th=[ 172], 99.50th=[ 190], 99.90th=[ 265], 99.95th=[ 265], 00:39:38.890 | 99.99th=[ 265] 00:39:38.890 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:38.890 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:38.890 lat (usec) : 250=95.51%, 500=0.19% 00:39:38.890 lat (msec) : 50=4.30% 00:39:38.890 cpu : usr=0.10%, sys=0.29%, ctx=535, majf=0, minf=1 00:39:38.890 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.890 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.890 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:38.890 00:39:38.890 Run status group 0 (all jobs): 00:39:38.890 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:39:38.890 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:39:38.890 00:39:38.890 Disk stats (read/write): 00:39:38.890 nvme0n1: ios=70/512, merge=0/0, ticks=839/72, in_queue=911, util=91.38% 00:39:38.890 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:39.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:39.151 11:34:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:39.151 rmmod nvme_tcp 00:39:39.151 rmmod nvme_fabrics 00:39:39.151 rmmod nvme_keyring 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 444452 ']' 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 444452 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 444452 ']' 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 444452 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444452 00:39:39.151 
11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444452' 00:39:39.151 killing process with pid 444452 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 444452 00:39:39.151 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 444452 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.411 11:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.314 00:39:41.314 real 0m9.188s 00:39:41.314 user 0m16.912s 00:39:41.314 sys 0m3.416s 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:41.314 ************************************ 00:39:41.314 END TEST nvmf_nmic 00:39:41.314 ************************************ 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:41.314 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:41.574 ************************************ 00:39:41.574 START TEST nvmf_fio_target 00:39:41.574 ************************************ 00:39:41.574 11:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:41.574 * Looking for test storage... 
00:39:41.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:41.574 
11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:41.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.574 --rc genhtml_branch_coverage=1 00:39:41.574 --rc genhtml_function_coverage=1 00:39:41.574 --rc genhtml_legend=1 00:39:41.574 --rc geninfo_all_blocks=1 00:39:41.574 --rc geninfo_unexecuted_blocks=1 00:39:41.574 00:39:41.574 ' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:41.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.574 --rc genhtml_branch_coverage=1 00:39:41.574 --rc genhtml_function_coverage=1 00:39:41.574 --rc genhtml_legend=1 00:39:41.574 --rc geninfo_all_blocks=1 00:39:41.574 --rc geninfo_unexecuted_blocks=1 00:39:41.574 00:39:41.574 ' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:41.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.574 --rc genhtml_branch_coverage=1 00:39:41.574 --rc genhtml_function_coverage=1 00:39:41.574 --rc genhtml_legend=1 00:39:41.574 --rc geninfo_all_blocks=1 00:39:41.574 --rc geninfo_unexecuted_blocks=1 00:39:41.574 00:39:41.574 ' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:41.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.574 --rc genhtml_branch_coverage=1 00:39:41.574 --rc genhtml_function_coverage=1 00:39:41.574 --rc genhtml_legend=1 00:39:41.574 --rc geninfo_all_blocks=1 
00:39:41.574 --rc geninfo_unexecuted_blocks=1 00:39:41.574 00:39:41.574 ' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:41.574 
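The `lt 1.15 2` / `cmp_versions` trace above walks a component-wise decimal version comparison: each version is split on `.-:` into an array, missing components compare as 0, and the first unequal pair decides the result. A minimal sketch of that logic — a simplified reimplementation mirroring the traced names, not the SPDK `scripts/common.sh` original:

```shell
# Hedged sketch of the cmp_versions/lt logic traced above; simplified
# reimplementation for illustration, not the SPDK scripts/common.sh code.
lt() {
    local -a ver1 ver2
    local i len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        # a missing component compares as 0, e.g. "1.15" vs "2" -> "1.15" vs "2.0"
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # strictly less-than: true
        (( a > b )) && return 1
    done
    return 1                      # equal: not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 1.9 1.10 && echo "1.9 < 1.10 (numeric, not lexical)"
```

Because components compare numerically, `1.9 < 1.10` holds — the behavior the per-component `decimal` calls in the trace exist to enforce.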
11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.574 11:34:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:41.574 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:41.575 
11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:41.575 11:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:41.575 11:34:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:43.495 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:43.496 11:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:43.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:43.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:43.496 
11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:43.496 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:43.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:43.496 11:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:43.496 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:43.497 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:43.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:43.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:39:43.758 00:39:43.758 --- 10.0.0.2 ping statistics --- 00:39:43.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.758 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:43.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:43.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:39:43.758 00:39:43.758 --- 10.0.0.1 ping statistics --- 00:39:43.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:43.758 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:43.758 11:34:08 
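The `nvmftestinit` trace above builds its test topology by moving one port of the NIC (`cvl_0_0`) into a private network namespace as the target side, leaving its peer (`cvl_0_1`) in the root namespace as the initiator, then verifying reachability in both directions with `ping`. A dry-run sketch of that sequence — `run()` only prints each command, since actually applying it needs root and the two real interfaces; drop the `echo` to execute:

```shell
# Dry-run sketch of the namespace topology built by nvmftestinit above.
# run() only prints, so no root or real NICs are required; the interface
# names and addresses mirror the trace.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0  INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                         # target side moves into the namespace
run ip addr add "$INIT_IP/24" dev "$INIT_IF"                  # initiator stays in the root namespace
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                                       # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INIT_IP"                  # target -> initiator
```

Isolating the target in a namespace is what lets a single host exercise a real TCP path over physical ports: traffic between `10.0.0.1` and `10.0.0.2` must cross the wire rather than loop back through the kernel.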
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=447026 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 447026 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 447026 ']' 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:43.758 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:43.758 [2024-11-17 11:34:08.343022] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:43.758 [2024-11-17 11:34:08.344215] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:39:43.758 [2024-11-17 11:34:08.344293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:44.017 [2024-11-17 11:34:08.418353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:44.017 [2024-11-17 11:34:08.466919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:44.017 [2024-11-17 11:34:08.466986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:44.017 [2024-11-17 11:34:08.467015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:44.017 [2024-11-17 11:34:08.467026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:44.017 [2024-11-17 11:34:08.467036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:44.017 [2024-11-17 11:34:08.468648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:44.017 [2024-11-17 11:34:08.468672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:44.017 [2024-11-17 11:34:08.468721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:44.017 [2024-11-17 11:34:08.468725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.017 [2024-11-17 11:34:08.559806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:44.017 [2024-11-17 11:34:08.560028] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:44.017 [2024-11-17 11:34:08.560331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:44.017 [2024-11-17 11:34:08.560925] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:44.017 [2024-11-17 11:34:08.561172] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:44.017 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:44.017 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:44.017 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:44.017 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:44.017 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:44.018 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:44.018 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:44.276 [2024-11-17 11:34:08.873521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:44.276 11:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:44.842 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:44.842 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:39:44.842 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:44.842 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:45.409 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:45.409 11:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:45.409 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:45.409 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:45.980 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:46.241 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:46.241 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:46.506 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:46.506 11:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:46.764 11:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:39:46.764 11:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:47.023 11:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:47.281 11:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:47.281 11:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:47.539 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:47.539 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:47.798 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:48.056 [2024-11-17 11:34:12.573670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.056 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:48.314 11:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:48.572 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:48.831 11:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:39:50.734 11:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:50.734 [global] 00:39:50.734 thread=1 00:39:50.734 invalidate=1 00:39:50.734 rw=write 00:39:50.734 time_based=1 00:39:50.734 runtime=1 00:39:50.734 ioengine=libaio 00:39:50.734 direct=1 00:39:50.734 bs=4096 00:39:50.734 iodepth=1 00:39:50.734 norandommap=0 00:39:50.734 numjobs=1 00:39:50.734 00:39:50.734 verify_dump=1 00:39:50.734 verify_backlog=512 00:39:50.734 verify_state_save=0 00:39:50.734 do_verify=1 00:39:50.734 verify=crc32c-intel 00:39:50.734 [job0] 00:39:50.734 filename=/dev/nvme0n1 00:39:50.734 [job1] 00:39:50.734 filename=/dev/nvme0n2 00:39:50.734 [job2] 00:39:50.734 filename=/dev/nvme0n3 00:39:50.734 [job3] 00:39:50.734 filename=/dev/nvme0n4 00:39:50.734 Could not set queue depth (nvme0n1) 00:39:50.734 Could not set queue depth (nvme0n2) 00:39:50.734 Could not set queue depth (nvme0n3) 00:39:50.734 Could not set queue depth (nvme0n4) 00:39:50.992 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.992 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.992 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.992 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:50.992 fio-3.35 00:39:50.992 Starting 4 threads 00:39:52.368 00:39:52.368 job0: (groupid=0, jobs=1): err= 0: pid=448020: Sun Nov 17 11:34:16 2024 00:39:52.368 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:39:52.368 slat (nsec): min=6869, max=19116, avg=14031.62, stdev=2614.25 00:39:52.368 clat (usec): min=40914, max=41042, avg=40975.78, stdev=32.76 00:39:52.368 lat (usec): min=40921, 
max=41056, avg=40989.81, stdev=33.35 00:39:52.368 clat percentiles (usec): 00:39:52.368 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:52.368 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:52.368 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:52.368 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:52.368 | 99.99th=[41157] 00:39:52.368 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:39:52.368 slat (usec): min=6, max=670, avg=10.15, stdev=29.46 00:39:52.369 clat (usec): min=136, max=1100, avg=266.07, stdev=76.46 00:39:52.369 lat (usec): min=144, max=1110, avg=276.22, stdev=83.65 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 141], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 229], 00:39:52.369 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 269], 00:39:52.369 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 404], 00:39:52.369 | 99.00th=[ 515], 99.50th=[ 652], 99.90th=[ 1106], 99.95th=[ 1106], 00:39:52.369 | 99.99th=[ 1106] 00:39:52.369 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:39:52.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:52.369 lat (usec) : 250=35.27%, 500=59.29%, 750=1.13%, 1000=0.19% 00:39:52.369 lat (msec) : 2=0.19%, 50=3.94% 00:39:52.369 cpu : usr=0.00%, sys=0.60%, ctx=535, majf=0, minf=1 00:39:52.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:52.369 job1: (groupid=0, jobs=1): err= 0: pid=448040: Sun Nov 17 11:34:16 2024 00:39:52.369 read: IOPS=162, BW=651KiB/s 
(667kB/s)(652KiB/1001msec) 00:39:52.369 slat (nsec): min=6066, max=27504, avg=8121.53, stdev=3343.14 00:39:52.369 clat (usec): min=205, max=41046, avg=5223.14, stdev=13404.32 00:39:52.369 lat (usec): min=211, max=41059, avg=5231.26, stdev=13406.55 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 208], 5.00th=[ 210], 10.00th=[ 210], 20.00th=[ 212], 00:39:52.369 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:39:52.369 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[41157], 95.00th=[41157], 00:39:52.369 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:52.369 | 99.99th=[41157] 00:39:52.369 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:39:52.369 slat (usec): min=6, max=1058, avg=10.79, stdev=46.50 00:39:52.369 clat (usec): min=140, max=4215, avg=272.05, stdev=193.68 00:39:52.369 lat (usec): min=147, max=4232, avg=282.84, stdev=202.99 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 145], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 225], 00:39:52.369 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 269], 00:39:52.369 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 400], 00:39:52.369 | 99.00th=[ 603], 99.50th=[ 1020], 99.90th=[ 4228], 99.95th=[ 4228], 00:39:52.369 | 99.99th=[ 4228] 00:39:52.369 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:39:52.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:52.369 lat (usec) : 250=46.81%, 500=48.89%, 750=0.59%, 1000=0.15% 00:39:52.369 lat (msec) : 2=0.44%, 10=0.15%, 50=2.96% 00:39:52.369 cpu : usr=0.40%, sys=0.40%, ctx=677, majf=0, minf=2 00:39:52.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 issued rwts: total=163,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:39:52.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:52.369 job2: (groupid=0, jobs=1): err= 0: pid=448070: Sun Nov 17 11:34:16 2024 00:39:52.369 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:39:52.369 slat (nsec): min=7310, max=19156, avg=13782.45, stdev=2737.14 00:39:52.369 clat (usec): min=344, max=42017, avg=39939.75, stdev=8850.78 00:39:52.369 lat (usec): min=353, max=42030, avg=39953.53, stdev=8851.90 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 347], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:39:52.369 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:52.369 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:52.369 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:52.369 | 99.99th=[42206] 00:39:52.369 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:39:52.369 slat (usec): min=7, max=863, avg=12.25, stdev=37.90 00:39:52.369 clat (usec): min=160, max=952, avg=229.75, stdev=61.75 00:39:52.369 lat (usec): min=169, max=1098, avg=241.99, stdev=72.64 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 206], 00:39:52.369 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:39:52.369 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 293], 00:39:52.369 | 99.00th=[ 433], 99.50th=[ 742], 99.90th=[ 955], 99.95th=[ 955], 00:39:52.369 | 99.99th=[ 955] 00:39:52.369 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:39:52.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:52.369 lat (usec) : 250=84.27%, 500=11.24%, 750=0.19%, 1000=0.37% 00:39:52.369 lat (msec) : 50=3.93% 00:39:52.369 cpu : usr=0.40%, sys=0.60%, ctx=536, majf=0, minf=2 00:39:52.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.369 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:52.369 job3: (groupid=0, jobs=1): err= 0: pid=448081: Sun Nov 17 11:34:16 2024 00:39:52.369 read: IOPS=23, BW=95.0KiB/s (97.3kB/s)(96.0KiB/1010msec) 00:39:52.369 slat (nsec): min=7451, max=15087, avg=13503.17, stdev=2089.06 00:39:52.369 clat (usec): min=260, max=41966, avg=37641.29, stdev=11501.34 00:39:52.369 lat (usec): min=268, max=41981, avg=37654.79, stdev=11503.07 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 262], 5.00th=[ 351], 10.00th=[41157], 20.00th=[41157], 00:39:52.369 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:52.369 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:52.369 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:52.369 | 99.99th=[42206] 00:39:52.369 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:39:52.369 slat (nsec): min=8689, max=44602, avg=10359.08, stdev=2582.61 00:39:52.369 clat (usec): min=158, max=286, avg=190.87, stdev=17.76 00:39:52.369 lat (usec): min=168, max=297, avg=201.23, stdev=18.23 00:39:52.369 clat percentiles (usec): 00:39:52.369 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 178], 00:39:52.369 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:39:52.369 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 231], 00:39:52.369 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 285], 99.95th=[ 285], 00:39:52.369 | 99.99th=[ 285] 00:39:52.369 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:39:52.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:52.369 lat (usec) : 250=94.03%, 500=1.87% 00:39:52.369 lat (msec) : 50=4.10% 
00:39:52.369 cpu : usr=0.59%, sys=0.40%, ctx=538, majf=0, minf=1 00:39:52.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:52.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:52.369 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:52.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:52.369 00:39:52.369 Run status group 0 (all jobs): 00:39:52.369 READ: bw=911KiB/s (933kB/s), 83.6KiB/s-651KiB/s (85.6kB/s-667kB/s), io=920KiB (942kB), run=1001-1010msec 00:39:52.369 WRITE: bw=8111KiB/s (8306kB/s), 2028KiB/s-2046KiB/s (2076kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1010msec 00:39:52.369 00:39:52.369 Disk stats (read/write): 00:39:52.369 nvme0n1: ios=69/512, merge=0/0, ticks=881/136, in_queue=1017, util=97.39% 00:39:52.369 nvme0n2: ios=68/512, merge=0/0, ticks=887/134, in_queue=1021, util=97.56% 00:39:52.369 nvme0n3: ios=76/512, merge=0/0, ticks=909/112, in_queue=1021, util=97.48% 00:39:52.369 nvme0n4: ios=43/512, merge=0/0, ticks=1686/87, in_queue=1773, util=97.57% 00:39:52.369 11:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:52.369 [global] 00:39:52.369 thread=1 00:39:52.369 invalidate=1 00:39:52.369 rw=randwrite 00:39:52.369 time_based=1 00:39:52.369 runtime=1 00:39:52.369 ioengine=libaio 00:39:52.369 direct=1 00:39:52.369 bs=4096 00:39:52.369 iodepth=1 00:39:52.369 norandommap=0 00:39:52.369 numjobs=1 00:39:52.369 00:39:52.369 verify_dump=1 00:39:52.369 verify_backlog=512 00:39:52.369 verify_state_save=0 00:39:52.369 do_verify=1 00:39:52.369 verify=crc32c-intel 00:39:52.369 [job0] 00:39:52.369 filename=/dev/nvme0n1 00:39:52.369 [job1] 00:39:52.369 filename=/dev/nvme0n2 00:39:52.369 [job2] 00:39:52.369 
filename=/dev/nvme0n3 00:39:52.369 [job3] 00:39:52.369 filename=/dev/nvme0n4 00:39:52.369 Could not set queue depth (nvme0n1) 00:39:52.369 Could not set queue depth (nvme0n2) 00:39:52.369 Could not set queue depth (nvme0n3) 00:39:52.369 Could not set queue depth (nvme0n4) 00:39:52.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:52.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:52.369 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:52.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:52.369 fio-3.35 00:39:52.369 Starting 4 threads 00:39:53.746 00:39:53.746 job0: (groupid=0, jobs=1): err= 0: pid=448321: Sun Nov 17 11:34:18 2024 00:39:53.746 read: IOPS=523, BW=2094KiB/s (2145kB/s)(2176KiB/1039msec) 00:39:53.747 slat (nsec): min=4831, max=55298, avg=27708.94, stdev=9610.12 00:39:53.747 clat (usec): min=195, max=41978, avg=1407.31, stdev=6248.70 00:39:53.747 lat (usec): min=208, max=41993, avg=1435.02, stdev=6246.55 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 243], 5.00th=[ 281], 10.00th=[ 302], 20.00th=[ 359], 00:39:53.747 | 30.00th=[ 404], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 465], 00:39:53.747 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 553], 95.00th=[ 586], 00:39:53.747 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:53.747 | 99.99th=[42206] 00:39:53.747 write: IOPS=985, BW=3942KiB/s (4037kB/s)(4096KiB/1039msec); 0 zone resets 00:39:53.747 slat (nsec): min=6151, max=46508, avg=13105.41, stdev=5640.78 00:39:53.747 clat (usec): min=155, max=483, avg=231.44, stdev=36.06 00:39:53.747 lat (usec): min=164, max=492, avg=244.55, stdev=36.78 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 210], 
00:39:53.747 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:39:53.747 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 293], 00:39:53.747 | 99.00th=[ 375], 99.50th=[ 412], 99.90th=[ 457], 99.95th=[ 486], 00:39:53.747 | 99.99th=[ 486] 00:39:53.747 bw ( KiB/s): min= 8192, max= 8192, per=45.67%, avg=8192.00, stdev= 0.00, samples=1 00:39:53.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:53.747 lat (usec) : 250=54.78%, 500=39.09%, 750=5.23%, 1000=0.06% 00:39:53.747 lat (msec) : 50=0.83% 00:39:53.747 cpu : usr=1.16%, sys=3.08%, ctx=1569, majf=0, minf=1 00:39:53.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 issued rwts: total=544,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:53.747 job1: (groupid=0, jobs=1): err= 0: pid=448323: Sun Nov 17 11:34:18 2024 00:39:53.747 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:39:53.747 slat (nsec): min=4116, max=40277, avg=10514.21, stdev=5732.12 00:39:53.747 clat (usec): min=160, max=41390, avg=433.66, stdev=2351.49 00:39:53.747 lat (usec): min=166, max=41396, avg=444.17, stdev=2351.62 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 186], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 219], 00:39:53.747 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 269], 00:39:53.747 | 70.00th=[ 297], 80.00th=[ 375], 90.00th=[ 465], 95.00th=[ 529], 00:39:53.747 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:39:53.747 | 99.99th=[41157] 00:39:53.747 write: IOPS=1585, BW=6342KiB/s (6494kB/s)(6348KiB/1001msec); 0 zone resets 00:39:53.747 slat (nsec): min=5136, max=37832, avg=12155.58, stdev=5427.62 00:39:53.747 clat (usec): min=123, max=793, avg=181.65, 
stdev=41.40 00:39:53.747 lat (usec): min=128, max=830, avg=193.80, stdev=43.80 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:39:53.747 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 180], 00:39:53.747 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 227], 95.00th=[ 249], 00:39:53.747 | 99.00th=[ 343], 99.50th=[ 379], 99.90th=[ 502], 99.95th=[ 791], 00:39:53.747 | 99.99th=[ 791] 00:39:53.747 bw ( KiB/s): min= 8192, max= 8192, per=45.67%, avg=8192.00, stdev= 0.00, samples=1 00:39:53.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:53.747 lat (usec) : 250=74.03%, 500=22.19%, 750=3.55%, 1000=0.03% 00:39:53.747 lat (msec) : 20=0.03%, 50=0.16% 00:39:53.747 cpu : usr=2.50%, sys=3.20%, ctx=3123, majf=0, minf=2 00:39:53.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 issued rwts: total=1536,1587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:53.747 job2: (groupid=0, jobs=1): err= 0: pid=448324: Sun Nov 17 11:34:18 2024 00:39:53.747 read: IOPS=521, BW=2087KiB/s (2137kB/s)(2108KiB/1010msec) 00:39:53.747 slat (nsec): min=9534, max=38852, avg=19114.37, stdev=3292.39 00:39:53.747 clat (usec): min=258, max=42252, avg=1432.46, stdev=6671.13 00:39:53.747 lat (usec): min=276, max=42281, avg=1451.57, stdev=6670.45 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 265], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:39:53.747 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:39:53.747 | 70.00th=[ 338], 80.00th=[ 343], 90.00th=[ 347], 95.00th=[ 351], 00:39:53.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:53.747 | 99.99th=[42206] 00:39:53.747 
write: IOPS=1013, BW=4055KiB/s (4153kB/s)(4096KiB/1010msec); 0 zone resets 00:39:53.747 slat (nsec): min=8789, max=52575, avg=19751.00, stdev=7107.52 00:39:53.747 clat (usec): min=165, max=516, avg=211.39, stdev=25.40 00:39:53.747 lat (usec): min=175, max=533, avg=231.14, stdev=29.49 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 190], 00:39:53.747 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 217], 00:39:53.747 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 245], 00:39:53.747 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 375], 99.95th=[ 519], 00:39:53.747 | 99.99th=[ 519] 00:39:53.747 bw ( KiB/s): min= 8192, max= 8192, per=45.67%, avg=8192.00, stdev= 0.00, samples=1 00:39:53.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:53.747 lat (usec) : 250=63.77%, 500=35.20%, 750=0.13% 00:39:53.747 lat (msec) : 50=0.90% 00:39:53.747 cpu : usr=1.68%, sys=4.26%, ctx=1552, majf=0, minf=1 00:39:53.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:53.747 job3: (groupid=0, jobs=1): err= 0: pid=448325: Sun Nov 17 11:34:18 2024 00:39:53.747 read: IOPS=590, BW=2363KiB/s (2419kB/s)(2372KiB/1004msec) 00:39:53.747 slat (nsec): min=4533, max=66376, avg=24684.80, stdev=12141.28 00:39:53.747 clat (usec): min=201, max=42010, avg=1220.57, stdev=5752.09 00:39:53.747 lat (usec): min=213, max=42022, avg=1245.25, stdev=5750.56 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 277], 00:39:53.747 | 30.00th=[ 322], 40.00th=[ 388], 50.00th=[ 420], 60.00th=[ 453], 00:39:53.747 | 
70.00th=[ 469], 80.00th=[ 490], 90.00th=[ 537], 95.00th=[ 562], 00:39:53.747 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:53.747 | 99.99th=[42206] 00:39:53.747 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:39:53.747 slat (nsec): min=6514, max=48419, avg=14345.03, stdev=6138.50 00:39:53.747 clat (usec): min=160, max=854, avg=236.95, stdev=48.69 00:39:53.747 lat (usec): min=178, max=863, avg=251.30, stdev=49.55 00:39:53.747 clat percentiles (usec): 00:39:53.747 | 1.00th=[ 176], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:39:53.747 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:39:53.747 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 277], 95.00th=[ 310], 00:39:53.747 | 99.00th=[ 412], 99.50th=[ 441], 99.90th=[ 766], 99.95th=[ 857], 00:39:53.747 | 99.99th=[ 857] 00:39:53.747 bw ( KiB/s): min= 8192, max= 8192, per=45.67%, avg=8192.00, stdev= 0.00, samples=1 00:39:53.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:53.747 lat (usec) : 250=57.02%, 500=36.61%, 750=5.50%, 1000=0.12% 00:39:53.747 lat (msec) : 50=0.74% 00:39:53.747 cpu : usr=1.50%, sys=2.99%, ctx=1618, majf=0, minf=1 00:39:53.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:53.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:53.747 issued rwts: total=593,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:53.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:53.747 00:39:53.747 Run status group 0 (all jobs): 00:39:53.747 READ: bw=12.0MiB/s (12.6MB/s), 2087KiB/s-6138KiB/s (2137kB/s-6285kB/s), io=12.5MiB (13.1MB), run=1001-1039msec 00:39:53.747 WRITE: bw=17.5MiB/s (18.4MB/s), 3942KiB/s-6342KiB/s (4037kB/s-6494kB/s), io=18.2MiB (19.1MB), run=1001-1039msec 00:39:53.747 00:39:53.747 Disk stats (read/write): 00:39:53.747 nvme0n1: 
ios=558/1024, merge=0/0, ticks=769/230, in_queue=999, util=99.10% 00:39:53.747 nvme0n2: ios=1083/1536, merge=0/0, ticks=487/269, in_queue=756, util=83.66% 00:39:53.747 nvme0n3: ios=555/1024, merge=0/0, ticks=693/207, in_queue=900, util=98.70% 00:39:53.747 nvme0n4: ios=645/1024, merge=0/0, ticks=1048/237, in_queue=1285, util=98.25% 00:39:53.747 11:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:53.747 [global] 00:39:53.747 thread=1 00:39:53.747 invalidate=1 00:39:53.747 rw=write 00:39:53.747 time_based=1 00:39:53.747 runtime=1 00:39:53.747 ioengine=libaio 00:39:53.747 direct=1 00:39:53.747 bs=4096 00:39:53.747 iodepth=128 00:39:53.747 norandommap=0 00:39:53.747 numjobs=1 00:39:53.747 00:39:53.747 verify_dump=1 00:39:53.747 verify_backlog=512 00:39:53.747 verify_state_save=0 00:39:53.747 do_verify=1 00:39:53.747 verify=crc32c-intel 00:39:53.747 [job0] 00:39:53.747 filename=/dev/nvme0n1 00:39:53.747 [job1] 00:39:53.747 filename=/dev/nvme0n2 00:39:53.747 [job2] 00:39:53.747 filename=/dev/nvme0n3 00:39:53.747 [job3] 00:39:53.747 filename=/dev/nvme0n4 00:39:53.747 Could not set queue depth (nvme0n1) 00:39:53.747 Could not set queue depth (nvme0n2) 00:39:53.747 Could not set queue depth (nvme0n3) 00:39:53.747 Could not set queue depth (nvme0n4) 00:39:54.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:54.006 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:54.006 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:54.006 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:54.006 fio-3.35 00:39:54.006 Starting 4 threads 00:39:55.382 00:39:55.382 job0: (groupid=0, jobs=1): 
err= 0: pid=448550: Sun Nov 17 11:34:19 2024 00:39:55.382 read: IOPS=4105, BW=16.0MiB/s (16.8MB/s)(16.2MiB/1009msec) 00:39:55.382 slat (usec): min=2, max=24117, avg=117.04, stdev=965.13 00:39:55.382 clat (usec): min=713, max=50046, avg=14496.80, stdev=5005.35 00:39:55.382 lat (usec): min=5320, max=50052, avg=14613.83, stdev=5085.36 00:39:55.382 clat percentiles (usec): 00:39:55.382 | 1.00th=[ 6783], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11338], 00:39:55.382 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12911], 60.00th=[14353], 00:39:55.382 | 70.00th=[15533], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:39:55.382 | 99.00th=[32900], 99.50th=[36439], 99.90th=[50070], 99.95th=[50070], 00:39:55.382 | 99.99th=[50070] 00:39:55.382 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:39:55.382 slat (usec): min=3, max=11848, avg=98.82, stdev=772.25 00:39:55.382 clat (usec): min=609, max=74000, avg=14748.07, stdev=8967.11 00:39:55.382 lat (usec): min=623, max=74013, avg=14846.89, stdev=9005.29 00:39:55.382 clat percentiles (usec): 00:39:55.382 | 1.00th=[ 2900], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[10159], 00:39:55.382 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12256], 60.00th=[12780], 00:39:55.382 | 70.00th=[15008], 80.00th=[16581], 90.00th=[24773], 95.00th=[29230], 00:39:55.382 | 99.00th=[64750], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:39:55.382 | 99.99th=[73925] 00:39:55.383 bw ( KiB/s): min=15736, max=20472, per=25.03%, avg=18104.00, stdev=3348.86, samples=2 00:39:55.383 iops : min= 3934, max= 5118, avg=4526.00, stdev=837.21, samples=2 00:39:55.383 lat (usec) : 750=0.06%, 1000=0.01% 00:39:55.383 lat (msec) : 2=0.17%, 4=0.42%, 10=12.16%, 20=72.86%, 50=13.28% 00:39:55.383 lat (msec) : 100=1.04% 00:39:55.383 cpu : usr=3.97%, sys=5.06%, ctx=240, majf=0, minf=1 00:39:55.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:55.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:39:55.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:55.383 issued rwts: total=4142,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:55.383 job1: (groupid=0, jobs=1): err= 0: pid=448551: Sun Nov 17 11:34:19 2024 00:39:55.383 read: IOPS=5134, BW=20.1MiB/s (21.0MB/s)(20.2MiB/1006msec) 00:39:55.383 slat (usec): min=2, max=11247, avg=95.49, stdev=750.16 00:39:55.383 clat (usec): min=2573, max=27254, avg=12300.61, stdev=3141.39 00:39:55.383 lat (usec): min=3555, max=27260, avg=12396.10, stdev=3195.10 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 6390], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10290], 00:39:55.383 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[11994], 00:39:55.383 | 70.00th=[12518], 80.00th=[13566], 90.00th=[17171], 95.00th=[19268], 00:39:55.383 | 99.00th=[22938], 99.50th=[24511], 99.90th=[26346], 99.95th=[27132], 00:39:55.383 | 99.99th=[27132] 00:39:55.383 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:39:55.383 slat (usec): min=3, max=10573, avg=81.09, stdev=616.80 00:39:55.383 clat (usec): min=1121, max=27170, avg=11378.49, stdev=2835.37 00:39:55.383 lat (usec): min=1130, max=27178, avg=11459.57, stdev=2876.78 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 7177], 20.00th=[ 9765], 00:39:55.383 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:39:55.383 | 70.00th=[12518], 80.00th=[13435], 90.00th=[14877], 95.00th=[16319], 00:39:55.383 | 99.00th=[18744], 99.50th=[20055], 99.90th=[21365], 99.95th=[23725], 00:39:55.383 | 99.99th=[27132] 00:39:55.383 bw ( KiB/s): min=21168, max=23224, per=30.69%, avg=22196.00, stdev=1453.81, samples=2 00:39:55.383 iops : min= 5292, max= 5806, avg=5549.00, stdev=363.45, samples=2 00:39:55.383 lat (msec) : 2=0.02%, 4=0.74%, 10=17.63%, 20=79.76%, 50=1.84% 00:39:55.383 cpu : 
usr=6.77%, sys=10.35%, ctx=344, majf=0, minf=1 00:39:55.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:39:55.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:55.383 issued rwts: total=5165,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:55.383 job2: (groupid=0, jobs=1): err= 0: pid=448552: Sun Nov 17 11:34:19 2024 00:39:55.383 read: IOPS=3312, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1006msec) 00:39:55.383 slat (usec): min=2, max=25140, avg=154.81, stdev=1122.54 00:39:55.383 clat (usec): min=849, max=65552, avg=18960.39, stdev=11789.30 00:39:55.383 lat (usec): min=6355, max=65567, avg=19115.20, stdev=11865.34 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 7177], 5.00th=[11469], 10.00th=[12256], 20.00th=[13435], 00:39:55.383 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14222], 60.00th=[14746], 00:39:55.383 | 70.00th=[16319], 80.00th=[19792], 90.00th=[37487], 95.00th=[49021], 00:39:55.383 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:39:55.383 | 99.99th=[65799] 00:39:55.383 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:39:55.383 slat (usec): min=3, max=27162, avg=129.00, stdev=758.12 00:39:55.383 clat (usec): min=9659, max=49739, avg=16884.96, stdev=5406.93 00:39:55.383 lat (usec): min=9672, max=49760, avg=17013.95, stdev=5457.53 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 9896], 5.00th=[11994], 10.00th=[12518], 20.00th=[13829], 00:39:55.383 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:39:55.383 | 70.00th=[17171], 80.00th=[20841], 90.00th=[24773], 95.00th=[25560], 00:39:55.383 | 99.00th=[36439], 99.50th=[49546], 99.90th=[49546], 99.95th=[49546], 00:39:55.383 | 99.99th=[49546] 00:39:55.383 bw ( KiB/s): min=12288, max=16384, per=19.82%, 
avg=14336.00, stdev=2896.31, samples=2 00:39:55.383 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:39:55.383 lat (usec) : 1000=0.01% 00:39:55.383 lat (msec) : 10=1.60%, 20=77.39%, 50=18.68%, 100=2.31% 00:39:55.383 cpu : usr=3.38%, sys=5.27%, ctx=384, majf=0, minf=1 00:39:55.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:39:55.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:55.383 issued rwts: total=3332,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:55.383 job3: (groupid=0, jobs=1): err= 0: pid=448553: Sun Nov 17 11:34:19 2024 00:39:55.383 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:39:55.383 slat (usec): min=2, max=11123, avg=114.34, stdev=707.65 00:39:55.383 clat (usec): min=3247, max=36466, avg=14210.80, stdev=3668.55 00:39:55.383 lat (usec): min=3265, max=36474, avg=14325.15, stdev=3709.99 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 6587], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11600], 00:39:55.383 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14484], 00:39:55.383 | 70.00th=[15270], 80.00th=[16188], 90.00th=[18220], 95.00th=[19792], 00:39:55.383 | 99.00th=[28705], 99.50th=[32900], 99.90th=[36439], 99.95th=[36439], 00:39:55.383 | 99.99th=[36439] 00:39:55.383 write: IOPS=4386, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1008msec); 0 zone resets 00:39:55.383 slat (usec): min=3, max=8946, avg=111.68, stdev=538.25 00:39:55.383 clat (usec): min=1516, max=53009, avg=15681.78, stdev=7631.54 00:39:55.383 lat (usec): min=5454, max=53027, avg=15793.46, stdev=7681.78 00:39:55.383 clat percentiles (usec): 00:39:55.383 | 1.00th=[ 6521], 5.00th=[ 8356], 10.00th=[11076], 20.00th=[12125], 00:39:55.383 | 30.00th=[12780], 40.00th=[13435], 50.00th=[14091], 60.00th=[14222], 00:39:55.383 | 
70.00th=[14746], 80.00th=[16581], 90.00th=[20841], 95.00th=[34866], 00:39:55.383 | 99.00th=[49021], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:39:55.383 | 99.99th=[53216] 00:39:55.383 bw ( KiB/s): min=15864, max=18488, per=23.75%, avg=17176.00, stdev=1855.45, samples=2 00:39:55.383 iops : min= 3966, max= 4622, avg=4294.00, stdev=463.86, samples=2 00:39:55.383 lat (msec) : 2=0.01%, 4=0.14%, 10=7.99%, 20=84.33%, 50=7.11% 00:39:55.383 lat (msec) : 100=0.41% 00:39:55.383 cpu : usr=5.96%, sys=8.74%, ctx=526, majf=0, minf=1 00:39:55.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:39:55.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:55.383 issued rwts: total=4096,4422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:55.383 00:39:55.383 Run status group 0 (all jobs): 00:39:55.383 READ: bw=64.8MiB/s (67.9MB/s), 12.9MiB/s-20.1MiB/s (13.6MB/s-21.0MB/s), io=65.4MiB (68.5MB), run=1006-1009msec 00:39:55.383 WRITE: bw=70.6MiB/s (74.1MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-22.9MB/s), io=71.3MiB (74.7MB), run=1006-1009msec 00:39:55.383 00:39:55.383 Disk stats (read/write): 00:39:55.383 nvme0n1: ios=3887/4096, merge=0/0, ticks=47792/44623, in_queue=92415, util=93.69% 00:39:55.383 nvme0n2: ios=4470/4608, merge=0/0, ticks=52882/50443, in_queue=103325, util=98.88% 00:39:55.383 nvme0n3: ios=2986/3072, merge=0/0, ticks=21512/15184, in_queue=36696, util=98.02% 00:39:55.383 nvme0n4: ios=3422/3584, merge=0/0, ticks=24904/28845, in_queue=53749, util=99.48% 00:39:55.383 11:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:55.383 [global] 00:39:55.383 thread=1 00:39:55.383 invalidate=1 00:39:55.383 rw=randwrite 
00:39:55.383 time_based=1 00:39:55.383 runtime=1 00:39:55.383 ioengine=libaio 00:39:55.383 direct=1 00:39:55.383 bs=4096 00:39:55.383 iodepth=128 00:39:55.383 norandommap=0 00:39:55.383 numjobs=1 00:39:55.383 00:39:55.383 verify_dump=1 00:39:55.383 verify_backlog=512 00:39:55.383 verify_state_save=0 00:39:55.383 do_verify=1 00:39:55.383 verify=crc32c-intel 00:39:55.383 [job0] 00:39:55.383 filename=/dev/nvme0n1 00:39:55.383 [job1] 00:39:55.383 filename=/dev/nvme0n2 00:39:55.383 [job2] 00:39:55.383 filename=/dev/nvme0n3 00:39:55.383 [job3] 00:39:55.383 filename=/dev/nvme0n4 00:39:55.383 Could not set queue depth (nvme0n1) 00:39:55.383 Could not set queue depth (nvme0n2) 00:39:55.383 Could not set queue depth (nvme0n3) 00:39:55.383 Could not set queue depth (nvme0n4) 00:39:55.383 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:55.383 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:55.383 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:55.383 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:55.383 fio-3.35 00:39:55.383 Starting 4 threads 00:39:56.761 00:39:56.761 job0: (groupid=0, jobs=1): err= 0: pid=448783: Sun Nov 17 11:34:21 2024 00:39:56.761 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:39:56.761 slat (usec): min=2, max=11395, avg=97.45, stdev=751.01 00:39:56.761 clat (usec): min=4329, max=30636, avg=13345.85, stdev=3583.14 00:39:56.761 lat (usec): min=4334, max=30640, avg=13443.30, stdev=3639.53 00:39:56.761 clat percentiles (usec): 00:39:56.761 | 1.00th=[ 4817], 5.00th=[ 7898], 10.00th=[ 9372], 20.00th=[11207], 00:39:56.761 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[13829], 00:39:56.761 | 70.00th=[14746], 80.00th=[15926], 90.00th=[17695], 95.00th=[19530], 
00:39:56.761 | 99.00th=[24249], 99.50th=[26084], 99.90th=[30540], 99.95th=[30540], 00:39:56.761 | 99.99th=[30540] 00:39:56.761 write: IOPS=4778, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1007msec); 0 zone resets 00:39:56.761 slat (usec): min=3, max=14422, avg=101.78, stdev=799.32 00:39:56.761 clat (usec): min=228, max=45883, avg=13644.64, stdev=6209.42 00:39:56.761 lat (usec): min=318, max=45888, avg=13746.42, stdev=6271.29 00:39:56.761 clat percentiles (usec): 00:39:56.761 | 1.00th=[ 2073], 5.00th=[ 6325], 10.00th=[ 8356], 20.00th=[ 9765], 00:39:56.761 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11994], 60.00th=[13173], 00:39:56.761 | 70.00th=[15401], 80.00th=[16712], 90.00th=[20841], 95.00th=[24773], 00:39:56.761 | 99.00th=[38011], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:39:56.761 | 99.99th=[45876] 00:39:56.761 bw ( KiB/s): min=17040, max=20440, per=27.82%, avg=18740.00, stdev=2404.16, samples=2 00:39:56.761 iops : min= 4260, max= 5110, avg=4685.00, stdev=601.04, samples=2 00:39:56.761 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.38% 00:39:56.761 lat (msec) : 2=0.04%, 4=0.64%, 10=16.46%, 20=72.88%, 50=9.54% 00:39:56.761 cpu : usr=3.08%, sys=3.58%, ctx=338, majf=0, minf=1 00:39:56.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:56.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:56.761 issued rwts: total=4608,4812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:56.761 job1: (groupid=0, jobs=1): err= 0: pid=448784: Sun Nov 17 11:34:21 2024 00:39:56.761 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1008msec) 00:39:56.761 slat (usec): min=2, max=24651, avg=203.51, stdev=1499.82 00:39:56.761 clat (usec): min=2439, max=87891, avg=22586.71, stdev=15695.76 00:39:56.761 lat (usec): min=7612, max=87897, avg=22790.22, stdev=15830.13 
00:39:56.761 clat percentiles (usec): 00:39:56.761 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:39:56.761 | 30.00th=[12125], 40.00th=[13173], 50.00th=[14877], 60.00th=[19268], 00:39:56.761 | 70.00th=[25035], 80.00th=[35390], 90.00th=[44827], 95.00th=[56361], 00:39:56.761 | 99.00th=[65799], 99.50th=[83362], 99.90th=[87557], 99.95th=[87557], 00:39:56.761 | 99.99th=[87557] 00:39:56.761 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:39:56.761 slat (usec): min=3, max=25843, avg=146.88, stdev=1035.32 00:39:56.761 clat (usec): min=1405, max=113788, avg=22248.63, stdev=21985.22 00:39:56.761 lat (usec): min=1411, max=113802, avg=22395.50, stdev=22103.33 00:39:56.761 clat percentiles (msec): 00:39:56.761 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 12], 00:39:56.761 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:39:56.761 | 70.00th=[ 14], 80.00th=[ 33], 90.00th=[ 47], 95.00th=[ 80], 00:39:56.761 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:39:56.761 | 99.99th=[ 114] 00:39:56.761 bw ( KiB/s): min= 7792, max=16384, per=17.94%, avg=12088.00, stdev=6075.46, samples=2 00:39:56.761 iops : min= 1948, max= 4096, avg=3022.00, stdev=1518.87, samples=2 00:39:56.761 lat (msec) : 2=0.25%, 4=0.54%, 10=7.02%, 20=61.42%, 50=21.51% 00:39:56.761 lat (msec) : 100=8.32%, 250=0.95% 00:39:56.761 cpu : usr=2.38%, sys=3.18%, ctx=288, majf=0, minf=1 00:39:56.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:56.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:56.761 issued rwts: total=2638,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:56.761 job2: (groupid=0, jobs=1): err= 0: pid=448785: Sun Nov 17 11:34:21 2024 00:39:56.761 read: IOPS=4575, BW=17.9MiB/s 
(18.7MB/s)(18.0MiB/1007msec) 00:39:56.761 slat (usec): min=3, max=14380, avg=109.86, stdev=919.71 00:39:56.761 clat (usec): min=9072, max=29038, avg=14128.93, stdev=3370.72 00:39:56.761 lat (usec): min=9076, max=38875, avg=14238.79, stdev=3479.28 00:39:56.761 clat percentiles (usec): 00:39:56.761 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11207], 20.00th=[11994], 00:39:56.761 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13435], 00:39:56.761 | 70.00th=[14615], 80.00th=[15795], 90.00th=[19530], 95.00th=[21365], 00:39:56.761 | 99.00th=[24511], 99.50th=[26608], 99.90th=[28705], 99.95th=[28967], 00:39:56.761 | 99.99th=[28967] 00:39:56.761 write: IOPS=4848, BW=18.9MiB/s (19.9MB/s)(19.1MiB/1007msec); 0 zone resets 00:39:56.761 slat (usec): min=4, max=11119, avg=94.60, stdev=715.36 00:39:56.761 clat (usec): min=1189, max=25247, avg=12844.16, stdev=2935.54 00:39:56.761 lat (usec): min=1214, max=25257, avg=12938.76, stdev=2981.16 00:39:56.761 clat percentiles (usec): 00:39:56.761 | 1.00th=[ 3982], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[11076], 00:39:56.761 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:39:56.761 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[18220], 00:39:56.761 | 99.00th=[20317], 99.50th=[23725], 99.90th=[24511], 99.95th=[25297], 00:39:56.761 | 99.99th=[25297] 00:39:56.761 bw ( KiB/s): min=17560, max=20480, per=28.23%, avg=19020.00, stdev=2064.75, samples=2 00:39:56.761 iops : min= 4390, max= 5120, avg=4755.00, stdev=516.19, samples=2 00:39:56.761 lat (msec) : 2=0.15%, 4=0.38%, 10=9.10%, 20=85.16%, 50=5.21% 00:39:56.761 cpu : usr=3.68%, sys=6.66%, ctx=332, majf=0, minf=2 00:39:56.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:56.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:56.761 issued rwts: total=4608,4882,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:39:56.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:56.762 job3: (groupid=0, jobs=1): err= 0: pid=448786: Sun Nov 17 11:34:21 2024 00:39:56.762 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:39:56.762 slat (usec): min=2, max=5561, avg=111.70, stdev=615.03 00:39:56.762 clat (usec): min=7932, max=23140, avg=14410.21, stdev=1576.93 00:39:56.762 lat (usec): min=7935, max=23153, avg=14521.92, stdev=1618.41 00:39:56.762 clat percentiles (usec): 00:39:56.762 | 1.00th=[10683], 5.00th=[11469], 10.00th=[12518], 20.00th=[13566], 00:39:56.762 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14222], 60.00th=[14484], 00:39:56.762 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16581], 95.00th=[17433], 00:39:56.762 | 99.00th=[18220], 99.50th=[18744], 99.90th=[20317], 99.95th=[20841], 00:39:56.762 | 99.99th=[23200] 00:39:56.762 write: IOPS=4195, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec); 0 zone resets 00:39:56.762 slat (usec): min=3, max=11000, avg=122.54, stdev=732.21 00:39:56.762 clat (usec): min=3682, max=42057, avg=16044.96, stdev=5211.89 00:39:56.762 lat (usec): min=3702, max=42075, avg=16167.49, stdev=5282.31 00:39:56.762 clat percentiles (usec): 00:39:56.762 | 1.00th=[ 8586], 5.00th=[12256], 10.00th=[13304], 20.00th=[13566], 00:39:56.762 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:39:56.762 | 70.00th=[14877], 80.00th=[16909], 90.00th=[24249], 95.00th=[31065], 00:39:56.762 | 99.00th=[32113], 99.50th=[32113], 99.90th=[41157], 99.95th=[41157], 00:39:56.762 | 99.99th=[42206] 00:39:56.762 bw ( KiB/s): min=16384, max=16440, per=24.36%, avg=16412.00, stdev=39.60, samples=2 00:39:56.762 iops : min= 4096, max= 4110, avg=4103.00, stdev= 9.90, samples=2 00:39:56.762 lat (msec) : 4=0.04%, 10=0.85%, 20=93.38%, 50=5.73% 00:39:56.762 cpu : usr=3.09%, sys=5.28%, ctx=377, majf=0, minf=1 00:39:56.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:56.762 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:56.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:56.762 issued rwts: total=4096,4212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:56.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:56.762 00:39:56.762 Run status group 0 (all jobs): 00:39:56.762 READ: bw=61.8MiB/s (64.8MB/s), 10.2MiB/s-17.9MiB/s (10.7MB/s-18.7MB/s), io=62.3MiB (65.3MB), run=1004-1008msec 00:39:56.762 WRITE: bw=65.8MiB/s (69.0MB/s), 11.9MiB/s-18.9MiB/s (12.5MB/s-19.9MB/s), io=66.3MiB (69.5MB), run=1004-1008msec 00:39:56.762 00:39:56.762 Disk stats (read/write): 00:39:56.762 nvme0n1: ios=3752/4096, merge=0/0, ticks=40089/40970, in_queue=81059, util=100.00% 00:39:56.762 nvme0n2: ios=2414/2560, merge=0/0, ticks=30569/30229, in_queue=60798, util=91.05% 00:39:56.762 nvme0n3: ios=3885/4096, merge=0/0, ticks=53239/50825, in_queue=104064, util=90.57% 00:39:56.762 nvme0n4: ios=3324/3584, merge=0/0, ticks=14744/18192, in_queue=32936, util=89.52% 00:39:56.762 11:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:56.762 11:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=448932 00:39:56.762 11:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:56.762 11:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:56.762 [global] 00:39:56.762 thread=1 00:39:56.762 invalidate=1 00:39:56.762 rw=read 00:39:56.762 time_based=1 00:39:56.762 runtime=10 00:39:56.762 ioengine=libaio 00:39:56.762 direct=1 00:39:56.762 bs=4096 00:39:56.762 iodepth=1 00:39:56.762 norandommap=1 00:39:56.762 numjobs=1 00:39:56.762 00:39:56.762 [job0] 00:39:56.762 filename=/dev/nvme0n1 00:39:56.762 [job1] 00:39:56.762 filename=/dev/nvme0n2 00:39:56.762 [job2] 
00:39:56.762 filename=/dev/nvme0n3 00:39:56.762 [job3] 00:39:56.762 filename=/dev/nvme0n4 00:39:56.762 Could not set queue depth (nvme0n1) 00:39:56.762 Could not set queue depth (nvme0n2) 00:39:56.762 Could not set queue depth (nvme0n3) 00:39:56.762 Could not set queue depth (nvme0n4) 00:39:57.022 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.022 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.022 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.022 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:57.022 fio-3.35 00:39:57.022 Starting 4 threads 00:39:59.556 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:00.122 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:00.122 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=290816, buflen=4096 00:40:00.122 fio: pid=449131, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:00.381 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:00.381 11:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:00.381 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37601280, buflen=4096 00:40:00.381 fio: pid=449130, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:00.639 
11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:00.639 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:00.639 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7335936, buflen=4096 00:40:00.639 fio: pid=449128, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:00.897 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:00.897 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:00.897 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4460544, buflen=4096 00:40:00.897 fio: pid=449129, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:00.897 00:40:00.897 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449128: Sun Nov 17 11:34:25 2024 00:40:00.897 read: IOPS=512, BW=2047KiB/s (2096kB/s)(7164KiB/3500msec) 00:40:00.897 slat (usec): min=4, max=18862, avg=29.64, stdev=567.02 00:40:00.897 clat (usec): min=192, max=41123, avg=1909.23, stdev=8113.36 00:40:00.897 lat (usec): min=202, max=59976, avg=1938.88, stdev=8225.35 00:40:00.897 clat percentiles (usec): 00:40:00.897 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:40:00.897 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:40:00.897 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 314], 00:40:00.897 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:00.897 | 99.99th=[41157] 
00:40:00.897 bw ( KiB/s): min= 96, max=13752, per=18.54%, avg=2373.17, stdev=5574.47, samples=6 00:40:00.897 iops : min= 24, max= 3438, avg=593.17, stdev=1393.68, samples=6 00:40:00.897 lat (usec) : 250=89.96%, 500=5.86% 00:40:00.897 lat (msec) : 50=4.13% 00:40:00.898 cpu : usr=0.14%, sys=0.77%, ctx=1795, majf=0, minf=2 00:40:00.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:00.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:00.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:00.898 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449129: Sun Nov 17 11:34:25 2024 00:40:00.898 read: IOPS=287, BW=1149KiB/s (1176kB/s)(4356KiB/3792msec) 00:40:00.898 slat (usec): min=4, max=12888, avg=31.85, stdev=553.21 00:40:00.898 clat (usec): min=196, max=41176, avg=3439.31, stdev=10988.21 00:40:00.898 lat (usec): min=200, max=53997, avg=3471.18, stdev=11092.64 00:40:00.898 clat percentiles (usec): 00:40:00.898 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:40:00.898 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:40:00.898 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 273], 95.00th=[41157], 00:40:00.898 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:00.898 | 99.99th=[41157] 00:40:00.898 bw ( KiB/s): min= 93, max= 8016, per=9.66%, avg=1236.00, stdev=2989.75, samples=7 00:40:00.898 iops : min= 23, max= 2004, avg=308.86, stdev=747.50, samples=7 00:40:00.898 lat (usec) : 250=86.97%, 500=4.95% 00:40:00.898 lat (msec) : 4=0.09%, 50=7.89% 00:40:00.898 cpu : usr=0.05%, sys=0.24%, ctx=1094, majf=0, minf=1 00:40:00.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:00.898 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:00.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:00.898 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449130: Sun Nov 17 11:34:25 2024 00:40:00.898 read: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(35.9MiB/3206msec) 00:40:00.898 slat (nsec): min=4260, max=57607, avg=8718.96, stdev=5532.99 00:40:00.898 clat (usec): min=194, max=41169, avg=335.93, stdev=1984.08 00:40:00.898 lat (usec): min=198, max=41185, avg=344.65, stdev=1984.58 00:40:00.898 clat percentiles (usec): 00:40:00.898 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:40:00.898 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 237], 00:40:00.898 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 289], 00:40:00.898 | 99.00th=[ 330], 99.50th=[ 478], 99.90th=[41157], 99.95th=[41157], 00:40:00.898 | 99.99th=[41157] 00:40:00.898 bw ( KiB/s): min= 990, max=17504, per=87.17%, avg=11155.67, stdev=6367.86, samples=6 00:40:00.898 iops : min= 247, max= 4376, avg=2788.83, stdev=1592.12, samples=6 00:40:00.898 lat (usec) : 250=67.65%, 500=31.97%, 750=0.09%, 1000=0.02% 00:40:00.898 lat (msec) : 2=0.02%, 50=0.24% 00:40:00.898 cpu : usr=1.25%, sys=3.68%, ctx=9182, majf=0, minf=2 00:40:00.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:00.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 issued rwts: total=9181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:00.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:00.898 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=449131: 
Sun Nov 17 11:34:25 2024 00:40:00.898 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(284KiB/2934msec) 00:40:00.898 slat (nsec): min=13482, max=49681, avg=20996.07, stdev=9122.69 00:40:00.898 clat (usec): min=40870, max=41397, avg=40979.99, stdev=60.95 00:40:00.898 lat (usec): min=40905, max=41435, avg=41001.07, stdev=60.74 00:40:00.898 clat percentiles (usec): 00:40:00.898 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:00.898 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:00.898 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:00.898 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:00.898 | 99.99th=[41157] 00:40:00.898 bw ( KiB/s): min= 95, max= 104, per=0.76%, avg=97.40, stdev= 3.71, samples=5 00:40:00.898 iops : min= 23, max= 26, avg=24.20, stdev= 1.10, samples=5 00:40:00.898 lat (msec) : 50=98.61% 00:40:00.898 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=1 00:40:00.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:00.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:00.898 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:00.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:00.898 00:40:00.898 Run status group 0 (all jobs): 00:40:00.898 READ: bw=12.5MiB/s (13.1MB/s), 96.8KiB/s-11.2MiB/s (99.1kB/s-11.7MB/s), io=47.4MiB (49.7MB), run=2934-3792msec 00:40:00.898 00:40:00.898 Disk stats (read/write): 00:40:00.898 nvme0n1: ios=1788/0, merge=0/0, ticks=3288/0, in_queue=3288, util=95.57% 00:40:00.898 nvme0n2: ios=1122/0, merge=0/0, ticks=3881/0, in_queue=3881, util=98.98% 00:40:00.898 nvme0n3: ios=8866/0, merge=0/0, ticks=3130/0, in_queue=3130, util=99.53% 00:40:00.898 nvme0n4: ios=114/0, merge=0/0, ticks=3967/0, in_queue=3967, util=99.25% 00:40:01.156 11:34:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:01.156 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:01.415 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:01.415 11:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:01.675 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:01.675 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:01.956 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:01.956 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:02.230 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:02.230 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 448932 00:40:02.230 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:02.230 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:02.505 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:02.505 nvmf hotplug test: fio failed as expected 00:40:02.505 11:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:02.767 rmmod nvme_tcp 00:40:02.767 rmmod nvme_fabrics 00:40:02.767 rmmod nvme_keyring 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 447026 ']' 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 447026 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 447026 ']' 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 447026 00:40:02.767 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 447026 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 447026' 00:40:02.767 killing process with pid 447026 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 447026 00:40:02.767 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 447026 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:03.025 11:34:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:03.025 11:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.931 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:04.931 00:40:04.931 real 0m23.590s 00:40:04.931 user 1m8.006s 00:40:04.931 sys 0m9.544s 00:40:04.931 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.931 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:04.931 ************************************ 00:40:04.931 END TEST nvmf_fio_target 00:40:04.931 ************************************ 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:05.190 ************************************ 00:40:05.190 START TEST nvmf_bdevio 00:40:05.190 
************************************ 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:05.190 * Looking for test storage... 00:40:05.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:05.190 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:05.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:05.191 --rc genhtml_branch_coverage=1 00:40:05.191 --rc genhtml_function_coverage=1 00:40:05.191 --rc genhtml_legend=1 00:40:05.191 --rc geninfo_all_blocks=1 00:40:05.191 --rc geninfo_unexecuted_blocks=1 00:40:05.191 00:40:05.191 ' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:05.191 11:34:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.191 11:34:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:05.191 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:05.192 11:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.731 11:34:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:07.731 11:34:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:07.731 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:07.731 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:07.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:07.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.731 
11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.731 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:40:07.732 00:40:07.732 --- 10.0.0.2 ping statistics --- 00:40:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.732 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:40:07.732 11:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:40:07.732 00:40:07.732 --- 10.0.0.1 ping statistics --- 00:40:07.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.732 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=451763 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 451763 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 451763 ']' 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.732 [2024-11-17 11:34:32.080336] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:07.732 [2024-11-17 11:34:32.081840] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:40:07.732 [2024-11-17 11:34:32.081921] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.732 [2024-11-17 11:34:32.155543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:07.732 [2024-11-17 11:34:32.201430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.732 [2024-11-17 11:34:32.201484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.732 [2024-11-17 11:34:32.201506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.732 [2024-11-17 11:34:32.201518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.732 [2024-11-17 11:34:32.201535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.732 [2024-11-17 11:34:32.203098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:07.732 [2024-11-17 11:34:32.203160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:07.732 [2024-11-17 11:34:32.203222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:07.732 [2024-11-17 11:34:32.203224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:07.732 [2024-11-17 11:34:32.285423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:07.732 [2024-11-17 11:34:32.285641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:07.732 [2024-11-17 11:34:32.285951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:07.732 [2024-11-17 11:34:32.286585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.732 [2024-11-17 11:34:32.286837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.732 [2024-11-17 11:34:32.339906] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.732 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.992 Malloc0 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:07.992 [2024-11-17 11:34:32.412168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:07.992 { 00:40:07.992 "params": { 00:40:07.992 "name": "Nvme$subsystem", 00:40:07.992 "trtype": "$TEST_TRANSPORT", 00:40:07.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.992 "adrfam": "ipv4", 00:40:07.992 "trsvcid": "$NVMF_PORT", 00:40:07.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.992 "hdgst": ${hdgst:-false}, 00:40:07.992 "ddgst": ${ddgst:-false} 00:40:07.992 }, 00:40:07.992 "method": "bdev_nvme_attach_controller" 00:40:07.992 } 00:40:07.992 EOF 00:40:07.992 )") 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:07.992 11:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:07.992 "params": { 00:40:07.992 "name": "Nvme1", 00:40:07.992 "trtype": "tcp", 00:40:07.992 "traddr": "10.0.0.2", 00:40:07.992 "adrfam": "ipv4", 00:40:07.992 "trsvcid": "4420", 00:40:07.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:07.992 "hdgst": false, 00:40:07.992 "ddgst": false 00:40:07.992 }, 00:40:07.992 "method": "bdev_nvme_attach_controller" 00:40:07.992 }' 00:40:07.992 [2024-11-17 11:34:32.462251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:07.992 [2024-11-17 11:34:32.462315] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451788 ] 00:40:07.992 [2024-11-17 11:34:32.531570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:07.992 [2024-11-17 11:34:32.583732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.992 [2024-11-17 11:34:32.583782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:07.992 [2024-11-17 11:34:32.583785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.251 I/O targets: 00:40:08.251 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:08.251 00:40:08.251 00:40:08.252 CUnit - A unit testing framework for C - Version 2.1-3 00:40:08.252 http://cunit.sourceforge.net/ 00:40:08.252 00:40:08.252 00:40:08.252 Suite: bdevio tests on: Nvme1n1 00:40:08.252 Test: blockdev write read block ...passed 00:40:08.252 Test: blockdev write zeroes read block ...passed 00:40:08.252 Test: blockdev write zeroes read no split ...passed 00:40:08.252 Test: blockdev 
write zeroes read split ...passed 00:40:08.252 Test: blockdev write zeroes read split partial ...passed 00:40:08.252 Test: blockdev reset ...[2024-11-17 11:34:32.904795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:08.252 [2024-11-17 11:34:32.904912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b3b70 (9): Bad file descriptor 00:40:08.511 [2024-11-17 11:34:32.909292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:08.511 passed 00:40:08.511 Test: blockdev write read 8 blocks ...passed 00:40:08.511 Test: blockdev write read size > 128k ...passed 00:40:08.511 Test: blockdev write read invalid size ...passed 00:40:08.511 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:08.511 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:08.511 Test: blockdev write read max offset ...passed 00:40:08.511 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:08.511 Test: blockdev writev readv 8 blocks ...passed 00:40:08.511 Test: blockdev writev readv 30 x 1block ...passed 00:40:08.511 Test: blockdev writev readv block ...passed 00:40:08.511 Test: blockdev writev readv size > 128k ...passed 00:40:08.511 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:08.511 Test: blockdev comparev and writev ...[2024-11-17 11:34:33.083813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.083852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.083877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 
[2024-11-17 11:34:33.083895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.084276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.084302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.084326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.084356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.084725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.084749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.084780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.084796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.085156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.085180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:08.511 [2024-11-17 11:34:33.085218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:08.511 passed 00:40:08.511 Test: blockdev nvme passthru rw ...passed 00:40:08.511 Test: blockdev nvme passthru vendor specific ...[2024-11-17 11:34:33.166763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:08.511 [2024-11-17 11:34:33.166791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:08.511 [2024-11-17 11:34:33.166941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:08.511 [2024-11-17 11:34:33.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:08.770 [2024-11-17 11:34:33.167109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:08.770 [2024-11-17 11:34:33.167134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:08.770 [2024-11-17 11:34:33.167279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:08.770 [2024-11-17 11:34:33.167303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:08.770 passed 00:40:08.770 Test: blockdev nvme admin passthru ...passed 00:40:08.770 Test: blockdev copy ...passed 00:40:08.770 00:40:08.770 Run Summary: Type Total Ran Passed Failed Inactive 00:40:08.770 suites 1 1 n/a 0 0 00:40:08.770 tests 23 23 23 0 0 00:40:08.770 asserts 152 152 152 0 n/a 00:40:08.770 00:40:08.770 Elapsed time = 0.861 
seconds 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.770 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.770 rmmod nvme_tcp 00:40:08.770 rmmod nvme_fabrics 00:40:09.029 rmmod nvme_keyring 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 451763 ']' 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 451763 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 451763 ']' 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 451763 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 451763 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 451763' 00:40:09.029 killing process with pid 451763 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 451763 00:40:09.029 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 451763 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.287 11:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:11.189 00:40:11.189 real 0m6.101s 00:40:11.189 user 0m7.210s 00:40:11.189 sys 0m2.437s 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:11.189 ************************************ 00:40:11.189 END TEST nvmf_bdevio 00:40:11.189 ************************************ 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:11.189 00:40:11.189 real 3m54.996s 00:40:11.189 user 8m55.592s 00:40:11.189 sys 1m23.702s 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:40:11.189 11:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:11.189 ************************************ 00:40:11.189 END TEST nvmf_target_core_interrupt_mode 00:40:11.189 ************************************ 00:40:11.189 11:34:35 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:11.189 11:34:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:11.189 11:34:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:11.189 11:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:11.190 ************************************ 00:40:11.190 START TEST nvmf_interrupt 00:40:11.190 ************************************ 00:40:11.190 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:11.448 * Looking for test storage... 
00:40:11.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:11.448 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.449 --rc genhtml_branch_coverage=1 00:40:11.449 --rc genhtml_function_coverage=1 00:40:11.449 --rc genhtml_legend=1 00:40:11.449 --rc geninfo_all_blocks=1 00:40:11.449 --rc geninfo_unexecuted_blocks=1 00:40:11.449 00:40:11.449 ' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.449 --rc genhtml_branch_coverage=1 00:40:11.449 --rc 
genhtml_function_coverage=1 00:40:11.449 --rc genhtml_legend=1 00:40:11.449 --rc geninfo_all_blocks=1 00:40:11.449 --rc geninfo_unexecuted_blocks=1 00:40:11.449 00:40:11.449 ' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.449 --rc genhtml_branch_coverage=1 00:40:11.449 --rc genhtml_function_coverage=1 00:40:11.449 --rc genhtml_legend=1 00:40:11.449 --rc geninfo_all_blocks=1 00:40:11.449 --rc geninfo_unexecuted_blocks=1 00:40:11.449 00:40:11.449 ' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:11.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:11.449 --rc genhtml_branch_coverage=1 00:40:11.449 --rc genhtml_function_coverage=1 00:40:11.449 --rc genhtml_legend=1 00:40:11.449 --rc geninfo_all_blocks=1 00:40:11.449 --rc geninfo_unexecuted_blocks=1 00:40:11.449 00:40:11.449 ' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:11.449 
11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.449 
11:34:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:11.449 11:34:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:11.449 
11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:11.449 11:34:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:13.354 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:13.354 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:13.355 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:13.355 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.355 11:34:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:13.355 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:13.355 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:40:13.355 11:34:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:40:13.355 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:40:13.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:40:13.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms
00:40:13.614
00:40:13.614 --- 10.0.0.2 ping statistics ---
00:40:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:13.614 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:40:13.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:40:13.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms
00:40:13.614
00:40:13.614 --- 10.0.0.1 ping statistics ---
00:40:13.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:40:13.614 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=453874
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 453874
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 453874 ']'
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:13.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:13.614 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:40:13.614 [2024-11-17 11:34:38.188848] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:40:13.614 [2024-11-17 11:34:38.189888] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:40:13.614 [2024-11-17 11:34:38.189955] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:40:13.614 [2024-11-17 11:34:38.260032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:40:13.873 [2024-11-17 11:34:38.304946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:40:13.873 [2024-11-17 11:34:38.305004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:40:13.873 [2024-11-17 11:34:38.305028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:40:13.873 [2024-11-17 11:34:38.305055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:40:13.873 [2024-11-17 11:34:38.305073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:40:13.873 [2024-11-17 11:34:38.306454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:13.873 [2024-11-17 11:34:38.306460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:13.873 [2024-11-17 11:34:38.389016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:40:13.873 [2024-11-17 11:34:38.389046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:40:13.873 [2024-11-17 11:34:38.389283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:40:13.873 5000+0 records in
00:40:13.873 5000+0 records out
00:40:13.873 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0144089 s, 711 MB/s
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:40:13.873 AIO0
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:13.873 11:34:38
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.873 [2024-11-17 11:34:38.503138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.873 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:13.874 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:13.874 [2024-11-17 11:34:38.527458] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 453874 0 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 453874 0 idle 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:14.133 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453874 root 20 0 128.2g 46848 34176 S 6.7 0.1 0:00.25 reactor_0' 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453874 root 20 0 128.2g 46848 34176 S 6.7 0.1 0:00.25 reactor_0 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 453874 1 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453874 1 idle 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:14.134 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453878 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453878 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 
reactor_1 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=454037 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 453874 0 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 453874 0 busy 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:14.393 11:34:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:14.393 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453874 root 20 0 128.2g 47616 34560 S 6.2 0.1 0:00.26 reactor_0' 00:40:14.393 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453874 root 20 0 128.2g 47616 34560 S 6.2 0.1 0:00.26 reactor_0 00:40:14.393 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:14.393 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:14.393 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:40:14.651 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:14.651 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:14.651 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:14.651 11:34:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
453874 -w 256 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453874 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.45 reactor_0' 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453874 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:02.45 reactor_0 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 453874 1 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 453874 1 busy 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:15.584 11:34:40 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.584 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:15.585 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.585 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.585 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.585 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:15.585 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453878 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.27 reactor_1' 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453878 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:01.27 reactor_1 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.843 11:34:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 454037 00:40:25.813 Initializing NVMe Controllers 00:40:25.813 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:25.813 Controller IO queue size 256, less than 
required. 00:40:25.813 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:25.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:25.813 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:25.813 Initialization complete. Launching workers. 00:40:25.813 ======================================================== 00:40:25.813 Latency(us) 00:40:25.813 Device Information : IOPS MiB/s Average min max 00:40:25.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13340.10 52.11 19204.58 3898.95 23543.97 00:40:25.813 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13714.30 53.57 18680.71 4156.79 59231.96 00:40:25.813 ======================================================== 00:40:25.813 Total : 27054.39 105.68 18939.02 3898.95 59231.96 00:40:25.813 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 453874 0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453874 0 idle 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453874 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453874 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.20 reactor_0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 453874 1 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453874 1 idle 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:25.813 11:34:49 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453878 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.97 reactor_1' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453878 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.97 reactor_1 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:25.813 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:25.814 11:34:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:27.190 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:27.190 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:27.190 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 453874 0 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453874 0 idle 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:27.449 11:34:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453874 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.30 reactor_0' 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453874 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.30 reactor_0 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:27.449 11:34:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 453874 1 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453874 1 idle 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453874 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453874 -w 256 00:40:27.449 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453878 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.00 reactor_1' 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453878 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.00 reactor_1 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:27.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:27.708 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:27.966 
11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:27.966 rmmod nvme_tcp 00:40:27.966 rmmod nvme_fabrics 00:40:27.966 rmmod nvme_keyring 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 453874 ']' 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 453874 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 453874 ']' 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 453874 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453874 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453874' 00:40:27.966 killing process with pid 453874 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 453874 00:40:27.966 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 453874 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.225 
11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:28.225 11:34:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.126 11:34:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:30.126 00:40:30.126 real 0m18.911s 00:40:30.126 user 0m36.920s 00:40:30.126 sys 0m6.617s 00:40:30.126 11:34:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.126 11:34:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:30.126 ************************************ 00:40:30.126 END TEST nvmf_interrupt 00:40:30.126 ************************************ 00:40:30.126 00:40:30.126 real 33m0.438s 00:40:30.126 user 87m23.475s 00:40:30.126 sys 7m58.623s 00:40:30.126 11:34:54 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.126 11:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.126 ************************************ 00:40:30.126 END TEST nvmf_tcp 00:40:30.126 ************************************ 00:40:30.126 11:34:54 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:30.126 11:34:54 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:30.126 11:34:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:30.126 11:34:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.126 11:34:54 -- common/autotest_common.sh@10 -- # set +x 00:40:30.386 ************************************ 00:40:30.386 START TEST spdkcli_nvmf_tcp 00:40:30.386 ************************************ 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:30.386 * Looking for test storage... 00:40:30.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.386 11:34:54 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:30.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.386 --rc genhtml_branch_coverage=1 00:40:30.386 --rc genhtml_function_coverage=1 00:40:30.386 --rc genhtml_legend=1 00:40:30.386 --rc geninfo_all_blocks=1 00:40:30.386 --rc 
geninfo_unexecuted_blocks=1 00:40:30.386 00:40:30.386 ' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:30.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.386 --rc genhtml_branch_coverage=1 00:40:30.386 --rc genhtml_function_coverage=1 00:40:30.386 --rc genhtml_legend=1 00:40:30.386 --rc geninfo_all_blocks=1 00:40:30.386 --rc geninfo_unexecuted_blocks=1 00:40:30.386 00:40:30.386 ' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:30.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.386 --rc genhtml_branch_coverage=1 00:40:30.386 --rc genhtml_function_coverage=1 00:40:30.386 --rc genhtml_legend=1 00:40:30.386 --rc geninfo_all_blocks=1 00:40:30.386 --rc geninfo_unexecuted_blocks=1 00:40:30.386 00:40:30.386 ' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:30.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.386 --rc genhtml_branch_coverage=1 00:40:30.386 --rc genhtml_function_coverage=1 00:40:30.386 --rc genhtml_legend=1 00:40:30.386 --rc geninfo_all_blocks=1 00:40:30.386 --rc geninfo_unexecuted_blocks=1 00:40:30.386 00:40:30.386 ' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:30.386 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:30.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=456045 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 456045 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 
456045 ']' 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:30.387 11:34:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.387 [2024-11-17 11:34:54.998162] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:30.387 [2024-11-17 11:34:54.998269] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid456045 ] 00:40:30.646 [2024-11-17 11:34:55.063992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:30.646 [2024-11-17 11:34:55.113373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:30.646 [2024-11-17 11:34:55.113377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.646 11:34:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:30.646 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:30.646 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:30.646 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:30.646 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:30.646 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:30.646 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:30.646 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:30.646 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:30.646 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:30.646 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:30.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:30.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:30.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:30.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:30.647 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:30.647 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:30.647 ' 00:40:33.950 [2024-11-17 11:34:57.940680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:34.887 [2024-11-17 11:34:59.213052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:40:37.416 [2024-11-17 11:35:01.616362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:39.315 [2024-11-17 11:35:03.622389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:40.689 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:40.689 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:40.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:40.689 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:40.689 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:40.689 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:40.689 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:40.689 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:40.690 11:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.256 11:35:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:41.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:40:41.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:41.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:41.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:41.256 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:41.256 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:41.256 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:41.256 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:41.256 ' 00:40:46.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:46.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:46.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:46.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:46.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:46.526 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:46.526 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:46.526 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:46.526 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 456045 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 456045 ']' 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 456045 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 456045 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:46.783 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:46.784 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 456045' 00:40:46.784 killing process with pid 456045 00:40:46.784 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 456045 00:40:46.784 11:35:11 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 456045 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 456045 ']' 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 456045 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 456045 ']' 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 456045 00:40:47.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (456045) - No such process 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 456045 is not found' 00:40:47.043 Process with pid 456045 is not found 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:47.043 00:40:47.043 real 0m16.675s 00:40:47.043 user 0m35.579s 00:40:47.043 sys 0m0.863s 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:47.043 11:35:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:47.043 ************************************ 00:40:47.043 END TEST spdkcli_nvmf_tcp 00:40:47.043 ************************************ 00:40:47.043 11:35:11 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:47.043 11:35:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:40:47.043 11:35:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:47.043 11:35:11 -- common/autotest_common.sh@10 -- # set +x 00:40:47.043 ************************************ 00:40:47.043 START TEST nvmf_identify_passthru 00:40:47.043 ************************************ 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:47.043 * Looking for test storage... 00:40:47.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:47.043 11:35:11 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.043 --rc genhtml_branch_coverage=1 00:40:47.043 --rc genhtml_function_coverage=1 00:40:47.043 --rc genhtml_legend=1 00:40:47.043 
--rc geninfo_all_blocks=1 00:40:47.043 --rc geninfo_unexecuted_blocks=1 00:40:47.043 00:40:47.043 ' 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.043 --rc genhtml_branch_coverage=1 00:40:47.043 --rc genhtml_function_coverage=1 00:40:47.043 --rc genhtml_legend=1 00:40:47.043 --rc geninfo_all_blocks=1 00:40:47.043 --rc geninfo_unexecuted_blocks=1 00:40:47.043 00:40:47.043 ' 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.043 --rc genhtml_branch_coverage=1 00:40:47.043 --rc genhtml_function_coverage=1 00:40:47.043 --rc genhtml_legend=1 00:40:47.043 --rc geninfo_all_blocks=1 00:40:47.043 --rc geninfo_unexecuted_blocks=1 00:40:47.043 00:40:47.043 ' 00:40:47.043 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:47.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:47.043 --rc genhtml_branch_coverage=1 00:40:47.043 --rc genhtml_function_coverage=1 00:40:47.043 --rc genhtml_legend=1 00:40:47.043 --rc geninfo_all_blocks=1 00:40:47.043 --rc geninfo_unexecuted_blocks=1 00:40:47.043 00:40:47.043 ' 00:40:47.043 11:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:47.043 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:47.044 11:35:11 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:47.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:47.044 11:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:47.044 11:35:11 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:47.044 11:35:11 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:47.044 11:35:11 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:47.044 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:47.044 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:47.044 11:35:11 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:47.044 11:35:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:49.578 
11:35:13 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:49.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:49.578 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:49.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:49.578 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.578 11:35:13 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:49.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:49.579 
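The discovery loop above buckets NICs by PCI vendor/device ID (0x8086:0x159b is an Intel E810 port bound to the `ice` driver; the 0x15b3 IDs are Mellanox parts). A standalone sketch of that bucketing, using the IDs listed in nvmf/common.sh; the `classify_nic` helper name is mine, not the harness's:

```shell
# Map a PCI vendor:device pair to the NIC family the test harness recognizes.
# IDs are the ones registered in nvmf/common.sh in the trace above.
classify_nic() {
  case "$1:$2" in
    0x8086:0x1592|0x8086:0x159b)                         echo e810 ;;  # Intel E810 (ice)
    0x8086:0x37d2)                                       echo x722 ;;  # Intel X722
    0x15b3:*)                                            echo mlx  ;;  # Mellanox (mlx5)
    *)                                                   echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # -> e810, as for 0000:0a:00.0/0000:0a:00.1 in this run
```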
11:35:13 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:49.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:40:49.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:40:49.579 00:40:49.579 --- 10.0.0.2 ping statistics --- 00:40:49.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.579 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:49.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:49.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:40:49.579 00:40:49.579 --- 10.0.0.1 ping statistics --- 00:40:49.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.579 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:49.579 11:35:13 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:49.579 
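The bring-up logged above moves one NIC port into a network namespace so the target (10.0.0.2, inside the namespace) and the initiator (10.0.0.1, in the default namespace) can exchange real TCP traffic on one host, then verifies both directions with ping. A sketch of that same ip/iptables sequence, wrapped in a function with a `DRY_RUN` switch (my addition, so the plan can be printed without root); the interface and namespace names are the ones from this run:

```shell
# Reproduce the namespace topology from the log. With DRY_RUN=echo the
# commands are printed instead of executed; running them for real needs root.
setup_tcp_ns() {
  # $1 = target NIC (moved into the namespace), $2 = initiator NIC
  local ns=cvl_0_0_ns_spdk run=${DRY_RUN:-}
  $run ip netns add "$ns"
  $run ip link set "$1" netns "$ns"
  $run ip addr add 10.0.0.1/24 dev "$2"
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$1"
  $run ip link set "$2" up
  $run ip netns exec "$ns" ip link set "$1" up
  $run ip netns exec "$ns" ip link set lo up
  # Allow inbound NVMe/TCP (port 4420) on the initiator-side interface.
  $run iptables -I INPUT 1 -i "$2" -p tcp --dport 4420 -j ACCEPT
  $run ping -c 1 10.0.0.2
}

DRY_RUN=echo setup_tcp_ns cvl_0_0 cvl_0_1
```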
11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:40:49.579 11:35:13 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:49.579 11:35:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:53.767 11:35:18 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:40:53.767 11:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:53.767 11:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:53.767 11:35:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=460665 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 460665 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 460665 ']' 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
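The serial and model numbers above are scraped from spdk_nvme_identify output with a grep|awk pipeline. A reproduction over canned text (the multi-word model string below is illustrative; only the serial appears verbatim in this log) also shows why `nvme_model_number` ends up as just `INTEL`: `awk '{print $3}'` keeps only the first word after "Model Number:":

```shell
# Stand-in for spdk_nvme_identify output; the serial is the one from this run,
# the model string is a made-up example to illustrate the truncation.
identify_output=$(cat <<'EOF'
Serial Number:                         PHLJ916004901P0FGN
Model Number:                          INTEL SSDPE2KX010T8
EOF
)

serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')
echo "serial=$serial model=$model"   # model keeps only the first word, "INTEL"
```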
00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:57.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.964 [2024-11-17 11:35:22.330590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:40:57.964 [2024-11-17 11:35:22.330668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:57.964 [2024-11-17 11:35:22.400129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:57.964 [2024-11-17 11:35:22.444340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:57.964 [2024-11-17 11:35:22.444403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:57.964 [2024-11-17 11:35:22.444436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:57.964 [2024-11-17 11:35:22.444448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:57.964 [2024-11-17 11:35:22.444457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:40:57.964 [2024-11-17 11:35:22.445955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.964 [2024-11-17 11:35:22.446022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:57.964 [2024-11-17 11:35:22.446083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:57.964 [2024-11-17 11:35:22.446086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:57.964 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.964 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.964 INFO: Log level set to 20 00:40:57.964 INFO: Requests: 00:40:57.964 { 00:40:57.964 "jsonrpc": "2.0", 00:40:57.964 "method": "nvmf_set_config", 00:40:57.964 "id": 1, 00:40:57.964 "params": { 00:40:57.964 "admin_cmd_passthru": { 00:40:57.964 "identify_ctrlr": true 00:40:57.965 } 00:40:57.965 } 00:40:57.965 } 00:40:57.965 00:40:57.965 INFO: response: 00:40:57.965 { 00:40:57.965 "jsonrpc": "2.0", 00:40:57.965 "id": 1, 00:40:57.965 "result": true 00:40:57.965 } 00:40:57.965 00:40:57.965 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:57.965 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:57.965 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:57.965 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.965 INFO: Setting log level to 20 00:40:57.965 INFO: Setting log level to 20 00:40:57.965 INFO: Log level set to 20 00:40:57.965 INFO: Log level set to 20 00:40:57.965 
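The nvmf_set_config request/response pair logged above is what enables identify passthrough on the target. Rebuilding the same JSON-RPC payload and checking its shape stand-alone; in a live run it is written to the app's /var/tmp/spdk.sock UNIX socket, typically via `scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr` as the `rpc_cmd` wrapper does here:

```shell
# The exact request body shown in the log, as one JSON document.
req='{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
      "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}'

# Validate that it parses and carries the identify_ctrlr flag.
printf '%s\n' "$req" | python3 -c '
import json, sys
r = json.load(sys.stdin)
assert r["params"]["admin_cmd_passthru"]["identify_ctrlr"] is True
print(r["method"])'   # prints: nvmf_set_config
```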
INFO: Requests: 00:40:57.965 { 00:40:57.965 "jsonrpc": "2.0", 00:40:57.965 "method": "framework_start_init", 00:40:57.965 "id": 1 00:40:57.965 } 00:40:57.965 00:40:57.965 INFO: Requests: 00:40:57.965 { 00:40:57.965 "jsonrpc": "2.0", 00:40:57.965 "method": "framework_start_init", 00:40:57.965 "id": 1 00:40:57.965 } 00:40:57.965 00:40:58.223 [2024-11-17 11:35:22.652140] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:58.223 INFO: response: 00:40:58.223 { 00:40:58.223 "jsonrpc": "2.0", 00:40:58.223 "id": 1, 00:40:58.223 "result": true 00:40:58.223 } 00:40:58.223 00:40:58.223 INFO: response: 00:40:58.223 { 00:40:58.223 "jsonrpc": "2.0", 00:40:58.223 "id": 1, 00:40:58.223 "result": true 00:40:58.223 } 00:40:58.223 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.223 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.223 INFO: Setting log level to 40 00:40:58.223 INFO: Setting log level to 40 00:40:58.223 INFO: Setting log level to 40 00:40:58.223 [2024-11-17 11:35:22.662157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:58.223 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:58.223 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.223 11:35:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:58.224 11:35:22 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:58.224 11:35:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 Nvme0n1 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 [2024-11-17 11:35:25.553296] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.505 11:35:25 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 [ 00:41:01.505 { 00:41:01.505 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:01.505 "subtype": "Discovery", 00:41:01.505 "listen_addresses": [], 00:41:01.505 "allow_any_host": true, 00:41:01.505 "hosts": [] 00:41:01.505 }, 00:41:01.505 { 00:41:01.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:01.505 "subtype": "NVMe", 00:41:01.505 "listen_addresses": [ 00:41:01.505 { 00:41:01.505 "trtype": "TCP", 00:41:01.505 "adrfam": "IPv4", 00:41:01.505 "traddr": "10.0.0.2", 00:41:01.505 "trsvcid": "4420" 00:41:01.505 } 00:41:01.505 ], 00:41:01.505 "allow_any_host": true, 00:41:01.505 "hosts": [], 00:41:01.505 "serial_number": "SPDK00000000000001", 00:41:01.505 "model_number": "SPDK bdev Controller", 00:41:01.505 "max_namespaces": 1, 00:41:01.505 "min_cntlid": 1, 00:41:01.505 "max_cntlid": 65519, 00:41:01.505 "namespaces": [ 00:41:01.505 { 00:41:01.505 "nsid": 1, 00:41:01.505 "bdev_name": "Nvme0n1", 00:41:01.505 "name": "Nvme0n1", 00:41:01.505 "nguid": "5760788E29E543F4998077F777B9A4C6", 00:41:01.505 "uuid": "5760788e-29e5-43f4-9980-77f777b9a4c6" 00:41:01.505 } 00:41:01.505 ] 00:41:01.505 } 00:41:01.505 ] 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.505 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:01.505 11:35:25 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:01.505 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:01.505 rmmod nvme_tcp 00:41:01.506 rmmod nvme_fabrics 00:41:01.506 rmmod nvme_keyring 00:41:01.506 11:35:25 
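The pass/fail check above compares the serial number read back over NVMe/TCP with the one read locally over PCIe; a match proves the passthrough subsystem forwarded the admin identify command to the real controller. The same check in isolation, with both values taken from this run's log:

```shell
nvme_serial_number=PHLJ916004901P0FGN   # read over PCIe earlier in the log
nvmf_serial_number=PHLJ916004901P0FGN   # read back over NVMe/TCP via the passthrough subsystem

if [ "$nvmf_serial_number" != "$nvme_serial_number" ]; then
  echo "identify passthrough failed: $nvmf_serial_number != $nvme_serial_number" >&2
  exit 1
fi
echo "passthrough serial matches"
```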
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:01.506 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:01.506 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:01.506 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 460665 ']' 00:41:01.506 11:35:25 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 460665 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 460665 ']' 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 460665 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460665 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460665' 00:41:01.506 killing process with pid 460665 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 460665 00:41:01.506 11:35:25 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 460665 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:02.880 11:35:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:02.880 11:35:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:02.880 11:35:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.417 11:35:29 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:05.417 00:41:05.417 real 0m17.952s 00:41:05.417 user 0m26.546s 00:41:05.417 sys 0m2.293s 00:41:05.417 11:35:29 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:05.417 11:35:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:05.417 ************************************ 00:41:05.417 END TEST nvmf_identify_passthru 00:41:05.417 ************************************ 00:41:05.417 11:35:29 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:05.417 11:35:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:05.417 11:35:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:05.417 11:35:29 -- common/autotest_common.sh@10 -- # set +x 00:41:05.417 ************************************ 00:41:05.417 START TEST nvmf_dif 00:41:05.417 ************************************ 00:41:05.417 11:35:29 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:05.417 * Looking for test storage... 
00:41:05.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:05.417 11:35:29 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:05.417 11:35:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:41:05.417 11:35:29 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:05.417 11:35:29 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:05.417 11:35:29 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:05.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.418 --rc genhtml_branch_coverage=1 00:41:05.418 --rc genhtml_function_coverage=1 00:41:05.418 --rc genhtml_legend=1 00:41:05.418 --rc geninfo_all_blocks=1 00:41:05.418 --rc geninfo_unexecuted_blocks=1 00:41:05.418 00:41:05.418 ' 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:05.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.418 --rc genhtml_branch_coverage=1 00:41:05.418 --rc genhtml_function_coverage=1 00:41:05.418 --rc genhtml_legend=1 00:41:05.418 --rc geninfo_all_blocks=1 00:41:05.418 --rc geninfo_unexecuted_blocks=1 00:41:05.418 00:41:05.418 ' 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:41:05.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.418 --rc genhtml_branch_coverage=1 00:41:05.418 --rc genhtml_function_coverage=1 00:41:05.418 --rc genhtml_legend=1 00:41:05.418 --rc geninfo_all_blocks=1 00:41:05.418 --rc geninfo_unexecuted_blocks=1 00:41:05.418 00:41:05.418 ' 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:05.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:05.418 --rc genhtml_branch_coverage=1 00:41:05.418 --rc genhtml_function_coverage=1 00:41:05.418 --rc genhtml_legend=1 00:41:05.418 --rc geninfo_all_blocks=1 00:41:05.418 --rc geninfo_unexecuted_blocks=1 00:41:05.418 00:41:05.418 ' 00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:05.418 11:35:29 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:05.418 11:35:29 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:05.418 11:35:29 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.418 11:35:29 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.418 11:35:29 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.418 11:35:29 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:05.418 11:35:29 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:05.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:05.418 11:35:29 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:05.418 11:35:29 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:05.418 11:35:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:07.321 11:35:31 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:07.321 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:07.321 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:07.321 11:35:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.322 11:35:31 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:07.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:07.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:07.322 
11:35:31 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:07.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:07.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:41:07.322 00:41:07.322 --- 10.0.0.2 ping statistics --- 00:41:07.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.322 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:07.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:07.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:41:07.322 00:41:07.322 --- 10.0.0.1 ping statistics --- 00:41:07.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.322 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:07.322 11:35:31 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:08.698 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:08.698 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:08.698 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:08.698 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:08.698 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:08.698 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:08.698 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:08.698 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:08.698 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:08.698 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:08.698 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:08.698 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:08.698 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:41:08.698 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:08.698 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:08.698 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:08.698 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:08.698 11:35:33 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:08.698 11:35:33 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=463810 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:08.698 11:35:33 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 463810 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 463810 ']' 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:08.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:08.698 11:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.698 [2024-11-17 11:35:33.235930] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:41:08.698 [2024-11-17 11:35:33.236021] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:08.699 [2024-11-17 11:35:33.308979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:08.957 [2024-11-17 11:35:33.358970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.957 [2024-11-17 11:35:33.359026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:08.957 [2024-11-17 11:35:33.359054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.957 [2024-11-17 11:35:33.359074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.957 [2024-11-17 11:35:33.359085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:08.957 [2024-11-17 11:35:33.359722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:08.957 11:35:33 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 11:35:33 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:08.957 11:35:33 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:08.957 11:35:33 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 [2024-11-17 11:35:33.498175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.957 11:35:33 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 ************************************ 00:41:08.957 START TEST fio_dif_1_default 00:41:08.957 ************************************ 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 bdev_null0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:08.957 [2024-11-17 11:35:33.554483] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:08.957 { 00:41:08.957 "params": { 00:41:08.957 "name": "Nvme$subsystem", 00:41:08.957 "trtype": "$TEST_TRANSPORT", 00:41:08.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:08.957 "adrfam": "ipv4", 00:41:08.957 "trsvcid": "$NVMF_PORT", 00:41:08.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:08.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:08.957 "hdgst": ${hdgst:-false}, 00:41:08.957 "ddgst": ${ddgst:-false} 00:41:08.957 }, 00:41:08.957 "method": "bdev_nvme_attach_controller" 00:41:08.957 } 00:41:08.957 EOF 00:41:08.957 )") 00:41:08.957 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:08.958 "params": { 00:41:08.958 "name": "Nvme0", 00:41:08.958 "trtype": "tcp", 00:41:08.958 "traddr": "10.0.0.2", 00:41:08.958 "adrfam": "ipv4", 00:41:08.958 "trsvcid": "4420", 00:41:08.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:08.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:08.958 "hdgst": false, 00:41:08.958 "ddgst": false 00:41:08.958 }, 00:41:08.958 "method": "bdev_nvme_attach_controller" 00:41:08.958 }' 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:08.958 11:35:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:09.216 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:09.216 fio-3.35 
00:41:09.216 Starting 1 thread 00:41:21.418 00:41:21.418 filename0: (groupid=0, jobs=1): err= 0: pid=464041: Sun Nov 17 11:35:44 2024 00:41:21.418 read: IOPS=192, BW=769KiB/s (788kB/s)(7696KiB/10007msec) 00:41:21.418 slat (nsec): min=5673, max=70339, avg=8697.66, stdev=3556.93 00:41:21.418 clat (usec): min=522, max=44961, avg=20775.32, stdev=20336.40 00:41:21.418 lat (usec): min=528, max=44976, avg=20784.02, stdev=20336.28 00:41:21.418 clat percentiles (usec): 00:41:21.418 | 1.00th=[ 570], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 652], 00:41:21.418 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 775], 60.00th=[41157], 00:41:21.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:21.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:41:21.418 | 99.99th=[44827] 00:41:21.418 bw ( KiB/s): min= 704, max= 832, per=99.73%, avg=768.00, stdev=45.25, samples=20 00:41:21.418 iops : min= 176, max= 208, avg=192.00, stdev=11.31, samples=20 00:41:21.418 lat (usec) : 750=48.80%, 1000=1.72% 00:41:21.418 lat (msec) : 50=49.48% 00:41:21.418 cpu : usr=90.04%, sys=9.67%, ctx=16, majf=0, minf=188 00:41:21.418 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:21.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:21.418 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:21.418 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:21.418 00:41:21.418 Run status group 0 (all jobs): 00:41:21.418 READ: bw=769KiB/s (788kB/s), 769KiB/s-769KiB/s (788kB/s-788kB/s), io=7696KiB (7881kB), run=10007-10007msec 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.418 00:41:21.418 real 0m11.212s 00:41:21.418 user 0m10.157s 00:41:21.418 sys 0m1.285s 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.418 11:35:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:21.418 ************************************ 00:41:21.418 END TEST fio_dif_1_default 00:41:21.418 ************************************ 00:41:21.418 11:35:44 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:21.418 11:35:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:21.419 11:35:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 ************************************ 00:41:21.419 START TEST fio_dif_1_multi_subsystems 00:41:21.419 ************************************ 00:41:21.419 11:35:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 bdev_null0 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 [2024-11-17 11:35:44.805636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 bdev_null1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:41:21.419 { 00:41:21.419 "params": { 00:41:21.419 "name": "Nvme$subsystem", 00:41:21.419 "trtype": "$TEST_TRANSPORT", 00:41:21.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.419 "adrfam": "ipv4", 00:41:21.419 "trsvcid": "$NVMF_PORT", 00:41:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.419 "hdgst": ${hdgst:-false}, 00:41:21.419 "ddgst": ${ddgst:-false} 00:41:21.419 }, 00:41:21.419 "method": "bdev_nvme_attach_controller" 00:41:21.419 } 00:41:21.419 EOF 00:41:21.419 )") 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:21.419 
11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:21.419 { 00:41:21.419 "params": { 00:41:21.419 "name": "Nvme$subsystem", 00:41:21.419 "trtype": "$TEST_TRANSPORT", 00:41:21.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:21.419 "adrfam": "ipv4", 00:41:21.419 "trsvcid": "$NVMF_PORT", 00:41:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:21.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:21.419 "hdgst": ${hdgst:-false}, 00:41:21.419 "ddgst": ${ddgst:-false} 00:41:21.419 }, 00:41:21.419 "method": "bdev_nvme_attach_controller" 00:41:21.419 } 00:41:21.419 EOF 00:41:21.419 )") 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:21.419 "params": { 00:41:21.419 "name": "Nvme0", 00:41:21.419 "trtype": "tcp", 00:41:21.419 "traddr": "10.0.0.2", 00:41:21.419 "adrfam": "ipv4", 00:41:21.419 "trsvcid": "4420", 00:41:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:21.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:21.419 "hdgst": false, 00:41:21.419 "ddgst": false 00:41:21.419 }, 00:41:21.419 "method": "bdev_nvme_attach_controller" 00:41:21.419 },{ 00:41:21.419 "params": { 00:41:21.419 "name": "Nvme1", 00:41:21.419 "trtype": "tcp", 00:41:21.419 "traddr": "10.0.0.2", 00:41:21.419 "adrfam": "ipv4", 00:41:21.419 "trsvcid": "4420", 00:41:21.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:21.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:21.419 "hdgst": false, 00:41:21.419 "ddgst": false 00:41:21.419 }, 00:41:21.419 "method": "bdev_nvme_attach_controller" 00:41:21.419 }' 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:21.419 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:21.420 11:35:44 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:21.420 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:21.420 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:21.420 fio-3.35 00:41:21.420 Starting 2 threads 00:41:31.565 00:41:31.565 filename0: (groupid=0, jobs=1): err= 0: pid=465487: Sun Nov 17 11:35:55 2024 00:41:31.565 read: IOPS=147, BW=589KiB/s (603kB/s)(5904KiB/10029msec) 00:41:31.565 slat (nsec): min=5801, max=33040, avg=8787.22, stdev=2950.53 00:41:31.565 clat (usec): min=543, max=46000, avg=27151.65, stdev=19329.05 00:41:31.565 lat (usec): min=550, max=46027, avg=27160.43, stdev=19329.01 00:41:31.565 clat percentiles (usec): 00:41:31.565 | 1.00th=[ 562], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 635], 00:41:31.565 | 30.00th=[ 676], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:31.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:31.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:41:31.565 | 99.99th=[45876] 00:41:31.565 bw ( KiB/s): min= 352, max= 896, per=42.47%, avg=588.80, stdev=200.08, samples=20 00:41:31.565 iops : min= 88, max= 224, avg=147.20, stdev=50.02, samples=20 00:41:31.565 lat (usec) : 750=32.32%, 1000=2.10% 00:41:31.565 lat (msec) : 2=0.27%, 50=65.31% 00:41:31.565 cpu : usr=94.38%, sys=5.32%, ctx=15, majf=0, minf=123 00:41:31.565 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:31.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:41:31.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.565 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.565 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:31.565 filename1: (groupid=0, jobs=1): err= 0: pid=465488: Sun Nov 17 11:35:55 2024 00:41:31.566 read: IOPS=199, BW=796KiB/s (815kB/s)(7984KiB/10030msec) 00:41:31.566 slat (nsec): min=6680, max=85964, avg=8612.69, stdev=3121.19 00:41:31.566 clat (usec): min=542, max=45970, avg=20073.52, stdev=20287.78 00:41:31.566 lat (usec): min=549, max=45998, avg=20082.13, stdev=20287.71 00:41:31.566 clat percentiles (usec): 00:41:31.566 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 668], 00:41:31.566 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 914], 60.00th=[41157], 00:41:31.566 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:31.566 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:41:31.566 | 99.99th=[45876] 00:41:31.566 bw ( KiB/s): min= 640, max= 1152, per=57.49%, avg=796.80, stdev=124.54, samples=20 00:41:31.566 iops : min= 160, max= 288, avg=199.20, stdev=31.14, samples=20 00:41:31.566 lat (usec) : 750=40.68%, 1000=10.52% 00:41:31.566 lat (msec) : 2=1.10%, 50=47.70% 00:41:31.566 cpu : usr=94.44%, sys=5.25%, ctx=15, majf=0, minf=187 00:41:31.566 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:31.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:31.566 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:31.566 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:31.566 00:41:31.566 Run status group 0 (all jobs): 00:41:31.566 READ: bw=1385KiB/s (1418kB/s), 589KiB/s-796KiB/s (603kB/s-815kB/s), io=13.6MiB (14.2MB), run=10029-10030msec 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.566 00:41:31.566 real 0m11.398s 00:41:31.566 user 0m20.300s 00:41:31.566 sys 0m1.361s 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:31.566 ************************************ 00:41:31.566 END TEST fio_dif_1_multi_subsystems 00:41:31.566 ************************************ 00:41:31.566 11:35:56 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:31.566 11:35:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:31.566 11:35:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.566 11:35:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:31.825 ************************************ 00:41:31.825 START TEST fio_dif_rand_params 00:41:31.825 ************************************ 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.825 bdev_null0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:31.825 [2024-11-17 11:35:56.260579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:31.825 { 00:41:31.825 "params": { 00:41:31.825 "name": "Nvme$subsystem", 00:41:31.825 "trtype": "$TEST_TRANSPORT", 00:41:31.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:31.825 "adrfam": "ipv4", 00:41:31.825 "trsvcid": "$NVMF_PORT", 00:41:31.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:31.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:31.825 "hdgst": ${hdgst:-false}, 00:41:31.825 "ddgst": 
${ddgst:-false} 00:41:31.825 }, 00:41:31.825 "method": "bdev_nvme_attach_controller" 00:41:31.825 } 00:41:31.825 EOF 00:41:31.825 )") 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- 
# grep libasan 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:31.825 "params": { 00:41:31.825 "name": "Nvme0", 00:41:31.825 "trtype": "tcp", 00:41:31.825 "traddr": "10.0.0.2", 00:41:31.825 "adrfam": "ipv4", 00:41:31.825 "trsvcid": "4420", 00:41:31.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:31.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:31.825 "hdgst": false, 00:41:31.825 "ddgst": false 00:41:31.825 }, 00:41:31.825 "method": "bdev_nvme_attach_controller" 00:41:31.825 }' 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:31.825 11:35:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.084 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:32.084 ... 00:41:32.084 fio-3.35 00:41:32.084 Starting 3 threads 00:41:38.640 00:41:38.640 filename0: (groupid=0, jobs=1): err= 0: pid=466843: Sun Nov 17 11:36:02 2024 00:41:38.640 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(148MiB/5021msec) 00:41:38.640 slat (nsec): min=7109, max=93427, avg=18006.64, stdev=9730.05 00:41:38.640 clat (usec): min=4850, max=54242, avg=12718.83, stdev=6544.97 00:41:38.640 lat (usec): min=4861, max=54260, avg=12736.83, stdev=6544.61 00:41:38.640 clat percentiles (usec): 00:41:38.640 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10945], 00:41:38.640 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:41:38.640 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:41:38.640 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:41:38.640 | 99.99th=[54264] 00:41:38.640 bw ( KiB/s): min=21760, max=33280, per=34.71%, avg=30182.40, stdev=3515.17, samples=10 00:41:38.640 iops : min= 170, max= 260, avg=235.80, stdev=27.46, samples=10 00:41:38.640 lat (msec) : 10=11.08%, 20=86.38%, 50=0.08%, 100=2.45% 00:41:38.640 cpu : usr=84.88%, sys=10.74%, ctx=212, majf=0, minf=84 00:41:38.640 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.640 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.640 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.640 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:38.640 filename0: (groupid=0, jobs=1): err= 0: pid=466844: Sun Nov 17 11:36:02 2024 00:41:38.640 read: IOPS=231, BW=28.9MiB/s (30.4MB/s)(145MiB/5005msec) 00:41:38.640 slat (nsec): min=4535, max=51583, 
avg=14829.05, stdev=4553.65 00:41:38.640 clat (usec): min=4697, max=52406, avg=12933.81, stdev=3747.54 00:41:38.640 lat (usec): min=4709, max=52419, avg=12948.63, stdev=3747.80 00:41:38.640 clat percentiles (usec): 00:41:38.640 | 1.00th=[ 5473], 5.00th=[ 8029], 10.00th=[ 8717], 20.00th=[10028], 00:41:38.640 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13173], 60.00th=[13698], 00:41:38.640 | 70.00th=[14353], 80.00th=[15139], 90.00th=[15795], 95.00th=[16581], 00:41:38.640 | 99.00th=[17695], 99.50th=[45351], 99.90th=[51643], 99.95th=[52167], 00:41:38.640 | 99.99th=[52167] 00:41:38.640 bw ( KiB/s): min=26624, max=34816, per=34.03%, avg=29593.60, stdev=2619.61, samples=10 00:41:38.640 iops : min= 208, max= 272, avg=231.20, stdev=20.47, samples=10 00:41:38.640 lat (msec) : 10=20.28%, 20=78.95%, 50=0.52%, 100=0.26% 00:41:38.640 cpu : usr=92.67%, sys=6.77%, ctx=21, majf=0, minf=81 00:41:38.640 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.640 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.641 issued rwts: total=1159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:38.641 filename0: (groupid=0, jobs=1): err= 0: pid=466845: Sun Nov 17 11:36:02 2024 00:41:38.641 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5044msec) 00:41:38.641 slat (nsec): min=4263, max=48865, avg=14191.89, stdev=4086.27 00:41:38.641 clat (usec): min=7367, max=98007, avg=13877.06, stdev=7752.66 00:41:38.641 lat (usec): min=7381, max=98019, avg=13891.25, stdev=7752.65 00:41:38.641 clat percentiles (usec): 00:41:38.641 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[11731], 00:41:38.641 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:41:38.641 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14353], 95.00th=[15270], 00:41:38.641 | 99.00th=[53740], 
99.50th=[54789], 99.90th=[56361], 99.95th=[98042], 00:41:38.641 | 99.99th=[98042] 00:41:38.641 bw ( KiB/s): min=18944, max=34048, per=31.91%, avg=27750.40, stdev=4417.93, samples=10 00:41:38.641 iops : min= 148, max= 266, avg=216.80, stdev=34.52, samples=10 00:41:38.641 lat (msec) : 10=9.30%, 20=87.29%, 50=0.74%, 100=2.67% 00:41:38.641 cpu : usr=92.52%, sys=6.94%, ctx=11, majf=0, minf=94 00:41:38.641 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:38.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:38.641 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:38.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:38.641 00:41:38.641 Run status group 0 (all jobs): 00:41:38.641 READ: bw=84.9MiB/s (89.1MB/s), 26.9MiB/s-29.4MiB/s (28.2MB/s-30.9MB/s), io=428MiB (449MB), run=5005-5044msec 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:38.641 
11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 bdev_null0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 [2024-11-17 11:36:02.442643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 bdev_null1 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 bdev_null2 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.641 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:38.642 11:36:02 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:38.642 { 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme$subsystem", 00:41:38.642 "trtype": "$TEST_TRANSPORT", 00:41:38.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "$NVMF_PORT", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.642 "hdgst": ${hdgst:-false}, 00:41:38.642 "ddgst": ${ddgst:-false} 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 } 00:41:38.642 EOF 00:41:38.642 )") 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@54 -- # local file 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:38.642 { 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme$subsystem", 00:41:38.642 "trtype": "$TEST_TRANSPORT", 00:41:38.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "$NVMF_PORT", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.642 "hdgst": ${hdgst:-false}, 00:41:38.642 "ddgst": ${ddgst:-false} 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 } 00:41:38.642 EOF 00:41:38.642 )") 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:38.642 { 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme$subsystem", 00:41:38.642 "trtype": "$TEST_TRANSPORT", 00:41:38.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "$NVMF_PORT", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:38.642 "hdgst": ${hdgst:-false}, 00:41:38.642 "ddgst": ${ddgst:-false} 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 } 00:41:38.642 EOF 00:41:38.642 )") 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme0", 00:41:38.642 "trtype": "tcp", 00:41:38.642 "traddr": "10.0.0.2", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "4420", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:38.642 "hdgst": false, 00:41:38.642 "ddgst": false 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 },{ 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme1", 00:41:38.642 "trtype": "tcp", 00:41:38.642 "traddr": "10.0.0.2", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "4420", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:38.642 "hdgst": false, 00:41:38.642 "ddgst": false 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 },{ 00:41:38.642 "params": { 00:41:38.642 "name": "Nvme2", 00:41:38.642 "trtype": "tcp", 00:41:38.642 "traddr": "10.0.0.2", 00:41:38.642 "adrfam": "ipv4", 00:41:38.642 "trsvcid": "4420", 00:41:38.642 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:38.642 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:38.642 "hdgst": false, 00:41:38.642 "ddgst": false 00:41:38.642 }, 00:41:38.642 "method": "bdev_nvme_attach_controller" 00:41:38.642 }' 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:38.642 11:36:02 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:38.642 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:38.643 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:38.643 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:38.643 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:38.643 11:36:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:38.643 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:38.643 ... 00:41:38.643 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:38.643 ... 00:41:38.643 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:38.643 ... 
00:41:38.643 fio-3.35 00:41:38.643 Starting 24 threads 00:41:50.846 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467816: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=451, BW=1804KiB/s (1847kB/s)(17.8MiB/10110msec) 00:41:50.846 slat (nsec): min=4364, max=85928, avg=37774.60, stdev=12151.74 00:41:50.846 clat (msec): min=32, max=393, avg=35.11, stdev=22.32 00:41:50.846 lat (msec): min=32, max=393, avg=35.15, stdev=22.32 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:50.846 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:41:50.846 | 99.00th=[ 43], 99.50th=[ 146], 99.90th=[ 393], 99.95th=[ 393], 00:41:50.846 | 99.99th=[ 393] 00:41:50.846 bw ( KiB/s): min= 512, max= 1920, per=4.20%, avg=1817.60, stdev=312.43, samples=20 00:41:50.846 iops : min= 128, max= 480, avg=454.40, stdev=78.11, samples=20 00:41:50.846 lat (msec) : 50=99.30%, 250=0.35%, 500=0.35% 00:41:50.846 cpu : usr=98.50%, sys=1.11%, ctx=11, majf=0, minf=19 00:41:50.846 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467817: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=451, BW=1807KiB/s (1850kB/s)(17.9MiB/10130msec) 00:41:50.846 slat (nsec): min=8211, max=72928, avg=26490.73, stdev=13309.75 00:41:50.846 clat (msec): min=24, max=243, avg=35.21, stdev=16.21 00:41:50.846 lat (msec): min=24, max=243, avg=35.24, stdev=16.21 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.846 
| 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:41:50.846 | 99.00th=[ 146], 99.50th=[ 184], 99.90th=[ 234], 99.95th=[ 236], 00:41:50.846 | 99.99th=[ 243] 00:41:50.846 bw ( KiB/s): min= 512, max= 1920, per=4.21%, avg=1824.00, stdev=313.19, samples=20 00:41:50.846 iops : min= 128, max= 480, avg=456.00, stdev=78.30, samples=20 00:41:50.846 lat (msec) : 50=98.95%, 250=1.05% 00:41:50.846 cpu : usr=97.35%, sys=1.68%, ctx=169, majf=0, minf=28 00:41:50.846 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:41:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467818: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=450, BW=1803KiB/s (1847kB/s)(17.8MiB/10114msec) 00:41:50.846 slat (nsec): min=3996, max=70083, avg=36418.32, stdev=10488.74 00:41:50.846 clat (msec): min=23, max=402, avg=35.16, stdev=22.81 00:41:50.846 lat (msec): min=23, max=402, avg=35.20, stdev=22.81 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:50.846 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:41:50.846 | 99.00th=[ 44], 99.50th=[ 146], 99.90th=[ 401], 99.95th=[ 401], 00:41:50.846 | 99.99th=[ 401] 00:41:50.846 bw ( KiB/s): min= 384, max= 2048, per=4.20%, avg=1817.20, stdev=344.00, samples=20 00:41:50.846 iops : min= 96, max= 512, avg=454.30, stdev=86.00, samples=20 00:41:50.846 lat (msec) : 50=99.30%, 250=0.35%, 500=0.35% 00:41:50.846 cpu : usr=97.56%, sys=1.56%, ctx=129, majf=0, minf=13 00:41:50.846 IO depths : 
1=5.9%, 2=12.1%, 4=24.6%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:41:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467819: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=458, BW=1833KiB/s (1877kB/s)(18.1MiB/10092msec) 00:41:50.846 slat (nsec): min=3799, max=68148, avg=22731.11, stdev=12801.22 00:41:50.846 clat (msec): min=9, max=207, avg=34.74, stdev=12.56 00:41:50.846 lat (msec): min=9, max=207, avg=34.76, stdev=12.56 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.846 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:41:50.846 | 99.00th=[ 96], 99.50th=[ 136], 99.90th=[ 209], 99.95th=[ 209], 00:41:50.846 | 99.99th=[ 209] 00:41:50.846 bw ( KiB/s): min= 768, max= 1920, per=4.26%, avg=1843.20, stdev=257.34, samples=20 00:41:50.846 iops : min= 192, max= 480, avg=460.80, stdev=64.34, samples=20 00:41:50.846 lat (msec) : 10=0.35%, 50=98.57%, 100=0.39%, 250=0.69% 00:41:50.846 cpu : usr=97.43%, sys=1.62%, ctx=120, majf=0, minf=26 00:41:50.846 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:41:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467820: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=454, BW=1818KiB/s (1861kB/s)(17.9MiB/10070msec) 
00:41:50.846 slat (nsec): min=6391, max=65337, avg=32254.04, stdev=11164.40 00:41:50.846 clat (msec): min=32, max=225, avg=34.94, stdev=14.77 00:41:50.846 lat (msec): min=32, max=225, avg=34.97, stdev=14.77 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:41:50.846 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:41:50.846 | 99.00th=[ 96], 99.50th=[ 182], 99.90th=[ 226], 99.95th=[ 226], 00:41:50.846 | 99.99th=[ 226] 00:41:50.846 bw ( KiB/s): min= 512, max= 1920, per=4.21%, avg=1824.00, stdev=313.19, samples=20 00:41:50.846 iops : min= 128, max= 480, avg=456.00, stdev=78.30, samples=20 00:41:50.846 lat (msec) : 50=98.95%, 100=0.35%, 250=0.70% 00:41:50.846 cpu : usr=98.36%, sys=1.18%, ctx=21, majf=0, minf=16 00:41:50.846 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:50.846 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:50.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=467821: Sun Nov 17 11:36:13 2024 00:41:50.846 read: IOPS=453, BW=1812KiB/s (1856kB/s)(17.9MiB/10135msec) 00:41:50.846 slat (nsec): min=8085, max=69134, avg=30474.83, stdev=11216.50 00:41:50.846 clat (msec): min=19, max=225, avg=35.06, stdev=15.62 00:41:50.846 lat (msec): min=19, max=225, avg=35.09, stdev=15.62 00:41:50.846 clat percentiles (msec): 00:41:50.846 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:41:50.846 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:41:50.846 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:41:50.847 | 99.00th=[ 142], 99.50th=[ 182], 99.90th=[ 226], 99.95th=[ 226], 00:41:50.847 | 
99.99th=[  226]
00:41:50.847     bw (  KiB/s): min=  640, max= 1920, per=4.23%, avg=1830.40, stdev=285.01, samples=20
00:41:50.847     iops        : min=  160, max=  480, avg=457.60, stdev=71.25, samples=20
00:41:50.847   lat (msec)   : 20=0.35%, 50=98.61%, 250=1.05%
00:41:50.847   cpu          : usr=96.76%, sys=1.95%, ctx=228, majf=0, minf=26
00:41:50.847   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename0: (groupid=0, jobs=1): err= 0: pid=467822: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.8MiB/10111msec)
00:41:50.847     slat (nsec): min=8909, max=76347, avg=32903.85, stdev=13355.60
00:41:50.847     clat (msec): min=24, max=395, avg=35.20, stdev=22.37
00:41:50.847      lat (msec): min=24, max=395, avg=35.24, stdev=22.37
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.847      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  397], 99.95th=[  397],
00:41:50.847      | 99.99th=[  397]
00:41:50.847     bw (  KiB/s): min=  513, max= 1920, per=4.20%, avg=1817.65, stdev=312.21, samples=20
00:41:50.847     iops        : min=  128, max=  480, avg=454.40, stdev=78.11, samples=20
00:41:50.847   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.847   cpu          : usr=96.59%, sys=2.05%, ctx=327, majf=0, minf=30
00:41:50.847   IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename0: (groupid=0, jobs=1): err= 0: pid=467823: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=451, BW=1806KiB/s (1850kB/s)(17.9MiB/10133msec)
00:41:50.847     slat (usec): min=5, max=143, avg=34.29, stdev=31.52
00:41:50.847     clat (msec): min=24, max=242, avg=35.12, stdev=16.17
00:41:50.847      lat (msec): min=24, max=242, avg=35.15, stdev=16.17
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   35], 95.00th=[   35],
00:41:50.847      | 99.00th=[  146], 99.50th=[  182], 99.90th=[  234], 99.95th=[  234],
00:41:50.847      | 99.99th=[  243]
00:41:50.847     bw (  KiB/s): min=  512, max= 1920, per=4.21%, avg=1822.85, stdev=312.86, samples=20
00:41:50.847     iops        : min=  128, max=  480, avg=455.70, stdev=78.21, samples=20
00:41:50.847   lat (msec)   : 50=98.95%, 250=1.05%
00:41:50.847   cpu          : usr=98.36%, sys=1.16%, ctx=23, majf=0, minf=33
00:41:50.847   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename1: (groupid=0, jobs=1): err= 0: pid=467824: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=453, BW=1812KiB/s (1856kB/s)(17.9MiB/10135msec)
00:41:50.847     slat (nsec): min=8831, max=66437, avg=33038.12, stdev=10568.83
00:41:50.847     clat (msec): min=19, max=231, avg=35.03, stdev=15.67
00:41:50.847      lat (msec): min=19, max=231, avg=35.06, stdev=15.67
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.847      | 99.00th=[  142], 99.50th=[  182], 99.90th=[  226], 99.95th=[  226],
00:41:50.847      | 99.99th=[  232]
00:41:50.847     bw (  KiB/s): min=  640, max= 1920, per=4.23%, avg=1830.40, stdev=285.01, samples=20
00:41:50.847     iops        : min=  160, max=  480, avg=457.60, stdev=71.25, samples=20
00:41:50.847   lat (msec)   : 20=0.35%, 50=98.61%, 250=1.05%
00:41:50.847   cpu          : usr=98.22%, sys=1.24%, ctx=47, majf=0, minf=33
00:41:50.847   IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4592,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename1: (groupid=0, jobs=1): err= 0: pid=467825: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=450, BW=1802KiB/s (1846kB/s)(17.8MiB/10120msec)
00:41:50.847     slat (usec): min=4, max=118, avg=47.86, stdev=25.10
00:41:50.847     clat (msec): min=24, max=410, avg=35.08, stdev=22.86
00:41:50.847      lat (msec): min=24, max=410, avg=35.12, stdev=22.86
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   33],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.847      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  401], 99.95th=[  401],
00:41:50.847      | 99.99th=[  409]
00:41:50.847     bw (  KiB/s): min=  384, max= 2048, per=4.20%, avg=1817.20, stdev=344.00, samples=20
00:41:50.847     iops        : min=   96, max=  512, avg=454.30, stdev=86.00, samples=20
00:41:50.847   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.847   cpu          : usr=97.33%, sys=1.73%, ctx=124, majf=0, minf=31
00:41:50.847   IO depths    : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename1: (groupid=0, jobs=1): err= 0: pid=467826: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.9MiB/10071msec)
00:41:50.847     slat (nsec): min=8827, max=90746, avg=33559.87, stdev=9880.47
00:41:50.847     clat (msec): min=24, max=225, avg=34.91, stdev=14.79
00:41:50.847      lat (msec): min=24, max=225, avg=34.94, stdev=14.78
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.847      | 99.00th=[   96], 99.50th=[  184], 99.90th=[  226], 99.95th=[  226],
00:41:50.847      | 99.99th=[  226]
00:41:50.847     bw (  KiB/s): min=  512, max= 1920, per=4.21%, avg=1824.00, stdev=313.19, samples=20
00:41:50.847     iops        : min=  128, max=  480, avg=456.00, stdev=78.30, samples=20
00:41:50.847   lat (msec)   : 50=98.95%, 100=0.35%, 250=0.70%
00:41:50.847   cpu          : usr=97.76%, sys=1.61%, ctx=84, majf=0, minf=23
00:41:50.847   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename1: (groupid=0, jobs=1): err= 0: pid=467827: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=460, BW=1840KiB/s (1884kB/s)(18.0MiB/10016msec)
00:41:50.847     slat (usec): min=5, max=131, avg=27.68, stdev=17.40
00:41:50.847     clat (msec): min=19, max=246, avg=34.56, stdev=13.25
00:41:50.847      lat (msec): min=19, max=246, avg=34.59, stdev=13.25
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   34], 20.00th=[   34],
00:41:50.847      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   35], 95.00th=[   35],
00:41:50.847      | 99.00th=[   46], 99.50th=[  120], 99.90th=[  241], 99.95th=[  241],
00:41:50.847      | 99.99th=[  247]
00:41:50.847     bw (  KiB/s): min=  769, max= 1920, per=4.24%, avg=1836.85, stdev=256.71, samples=20
00:41:50.847     iops        : min=  192, max=  480, avg=459.20, stdev=64.23, samples=20
00:41:50.847   lat (msec)   : 20=0.35%, 50=98.96%, 250=0.69%
00:41:50.847   cpu          : usr=96.60%, sys=2.20%, ctx=344, majf=0, minf=32
00:41:50.847   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.847      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.847 filename1: (groupid=0, jobs=1): err= 0: pid=467828: Sun Nov 17 11:36:13 2024
00:41:50.847   read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.8MiB/10062msec)
00:41:50.847     slat (usec): min=10, max=127, avg=64.04, stdev=27.85
00:41:50.847     clat (msec): min=31, max=400, avg=34.73, stdev=22.02
00:41:50.847      lat (msec): min=31, max=400, avg=34.80, stdev=22.02
00:41:50.847     clat percentiles (msec):
00:41:50.847      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   33],
00:41:50.847      | 30.00th=[   33], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.847      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.847      | 99.00th=[   41], 99.50th=[   97], 99.90th=[  401], 99.95th=[  401],
00:41:50.847      | 99.99th=[  401]
00:41:50.847     bw (  KiB/s): min=  384, max= 1920, per=4.20%, avg=1817.60, stdev=341.45, samples=20
00:41:50.847     iops        : min=   96, max=  480, avg=454.40, stdev=85.36, samples=20
00:41:50.847   lat (msec)   : 50=99.30%, 100=0.35%, 500=0.35%
00:41:50.847   cpu          : usr=98.52%, sys=1.03%, ctx=15, majf=0, minf=22
00:41:50.847   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.847      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.847      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename1: (groupid=0, jobs=1): err= 0: pid=467829: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=451, BW=1804KiB/s (1847kB/s)(17.8MiB/10110msec)
00:41:50.848     slat (usec): min=5, max=120, avg=38.24, stdev=19.66
00:41:50.848     clat (msec): min=31, max=402, avg=35.12, stdev=22.72
00:41:50.848      lat (msec): min=31, max=402, avg=35.15, stdev=22.71
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   42], 99.50th=[  142], 99.90th=[  401], 99.95th=[  401],
00:41:50.848      | 99.99th=[  401]
00:41:50.848     bw (  KiB/s): min=  384, max= 2048, per=4.20%, avg=1817.60, stdev=343.96, samples=20
00:41:50.848     iops        : min=   96, max=  512, avg=454.40, stdev=85.99, samples=20
00:41:50.848   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.848   cpu          : usr=98.25%, sys=1.23%, ctx=33, majf=0, minf=28
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename1: (groupid=0, jobs=1): err= 0: pid=467830: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=451, BW=1805KiB/s (1848kB/s)(17.8MiB/10106msec)
00:41:50.848     slat (usec): min=6, max=108, avg=38.93, stdev=15.70
00:41:50.848     clat (msec): min=25, max=401, avg=35.09, stdev=22.33
00:41:50.848      lat (msec): min=25, max=401, avg=35.13, stdev=22.33
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  393], 99.95th=[  393],
00:41:50.848      | 99.99th=[  401]
00:41:50.848     bw (  KiB/s): min=  512, max= 1920, per=4.20%, avg=1817.60, stdev=312.43, samples=20
00:41:50.848     iops        : min=  128, max=  480, avg=454.40, stdev=78.11, samples=20
00:41:50.848   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.848   cpu          : usr=97.97%, sys=1.42%, ctx=93, majf=0, minf=31
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename1: (groupid=0, jobs=1): err= 0: pid=467831: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.8MiB/10113msec)
00:41:50.848     slat (nsec): min=5592, max=85451, avg=37914.53, stdev=11973.79
00:41:50.848     clat (msec): min=32, max=396, avg=35.12, stdev=22.45
00:41:50.848      lat (msec): min=32, max=396, avg=35.16, stdev=22.45
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  397], 99.95th=[  397],
00:41:50.848      | 99.99th=[  397]
00:41:50.848     bw (  KiB/s): min=  512, max= 1920, per=4.20%, avg=1817.60, stdev=312.43, samples=20
00:41:50.848     iops        : min=  128, max=  480, avg=454.40, stdev=78.11, samples=20
00:41:50.848   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.848   cpu          : usr=98.45%, sys=1.13%, ctx=13, majf=0, minf=17
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename2: (groupid=0, jobs=1): err= 0: pid=467832: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=454, BW=1818KiB/s (1861kB/s)(17.9MiB/10070msec)
00:41:50.848     slat (usec): min=8, max=120, avg=30.05, stdev=10.77
00:41:50.848     clat (msec): min=23, max=225, avg=34.95, stdev=14.78
00:41:50.848      lat (msec): min=23, max=225, avg=34.98, stdev=14.78
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   34], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   35], 95.00th=[   35],
00:41:50.848      | 99.00th=[   96], 99.50th=[  182], 99.90th=[  226], 99.95th=[  226],
00:41:50.848      | 99.99th=[  226]
00:41:50.848     bw (  KiB/s): min=  512, max= 1920, per=4.21%, avg=1824.00, stdev=313.19, samples=20
00:41:50.848     iops        : min=  128, max=  480, avg=456.00, stdev=78.30, samples=20
00:41:50.848   lat (msec)   : 50=98.95%, 100=0.35%, 250=0.70%
00:41:50.848   cpu          : usr=98.18%, sys=1.34%, ctx=32, majf=0, minf=20
00:41:50.848   IO depths    : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename2: (groupid=0, jobs=1): err= 0: pid=467833: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=454, BW=1818KiB/s (1861kB/s)(17.9MiB/10070msec)
00:41:50.848     slat (nsec): min=8768, max=65100, avg=32558.37, stdev=8689.43
00:41:50.848     clat (msec): min=32, max=225, avg=34.93, stdev=14.78
00:41:50.848      lat (msec): min=32, max=225, avg=34.96, stdev=14.78
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   96], 99.50th=[  184], 99.90th=[  226], 99.95th=[  226],
00:41:50.848      | 99.99th=[  226]
00:41:50.848     bw (  KiB/s): min=  512, max= 1920, per=4.21%, avg=1824.00, stdev=313.19, samples=20
00:41:50.848     iops        : min=  128, max=  480, avg=456.00, stdev=78.30, samples=20
00:41:50.848   lat (msec)   : 50=98.95%, 100=0.35%, 250=0.70%
00:41:50.848   cpu          : usr=98.69%, sys=0.90%, ctx=13, majf=0, minf=18
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename2: (groupid=0, jobs=1): err= 0: pid=467834: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.8MiB/10111msec)
00:41:50.848     slat (nsec): min=8531, max=85800, avg=37078.17, stdev=11421.79
00:41:50.848     clat (msec): min=32, max=399, avg=35.16, stdev=22.64
00:41:50.848      lat (msec): min=32, max=399, avg=35.20, stdev=22.64
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  401], 99.95th=[  401],
00:41:50.848      | 99.99th=[  401]
00:41:50.848     bw (  KiB/s): min=  384, max= 1920, per=4.20%, avg=1817.60, stdev=341.45, samples=20
00:41:50.848     iops        : min=   96, max=  480, avg=454.40, stdev=85.36, samples=20
00:41:50.848   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.848   cpu          : usr=98.56%, sys=1.05%, ctx=15, majf=0, minf=19
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename2: (groupid=0, jobs=1): err= 0: pid=467835: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.8MiB/10063msec)
00:41:50.848     slat (usec): min=8, max=117, avg=41.67, stdev=22.01
00:41:50.848     clat (msec): min=32, max=400, avg=34.92, stdev=21.99
00:41:50.848      lat (msec): min=32, max=400, avg=34.96, stdev=21.99
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   33],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   41], 99.50th=[   97], 99.90th=[  401], 99.95th=[  401],
00:41:50.848      | 99.99th=[  401]
00:41:50.848     bw (  KiB/s): min=  384, max= 1920, per=4.20%, avg=1817.60, stdev=341.45, samples=20
00:41:50.848     iops        : min=   96, max=  480, avg=454.40, stdev=85.36, samples=20
00:41:50.848   lat (msec)   : 50=99.30%, 100=0.35%, 500=0.35%
00:41:50.848   cpu          : usr=97.25%, sys=1.68%, ctx=126, majf=0, minf=19
00:41:50.848   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.848      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.848      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.848      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.848 filename2: (groupid=0, jobs=1): err= 0: pid=467836: Sun Nov 17 11:36:13 2024
00:41:50.848   read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.8MiB/10112msec)
00:41:50.848     slat (nsec): min=4093, max=70050, avg=32677.47, stdev=11373.55
00:41:50.848     clat (msec): min=20, max=402, avg=35.17, stdev=22.72
00:41:50.848      lat (msec): min=20, max=402, avg=35.20, stdev=22.72
00:41:50.848     clat percentiles (msec):
00:41:50.848      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.848      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.848      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.848      | 99.00th=[   42], 99.50th=[  142], 99.90th=[  401], 99.95th=[  401],
00:41:50.848      | 99.99th=[  401]
00:41:50.848     bw (  KiB/s): min=  384, max= 2048, per=4.20%, avg=1817.60, stdev=343.96, samples=20
00:41:50.849     iops        : min=   96, max=  512, avg=454.40, stdev=85.99, samples=20
00:41:50.849   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.849   cpu          : usr=98.52%, sys=1.09%, ctx=15, majf=0, minf=24
00:41:50.849   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.849      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.849      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.849 filename2: (groupid=0, jobs=1): err= 0: pid=467837: Sun Nov 17 11:36:13 2024
00:41:50.849   read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.8MiB/10111msec)
00:41:50.849     slat (usec): min=4, max=140, avg=44.24, stdev=25.82
00:41:50.849     clat (msec): min=19, max=416, avg=35.06, stdev=22.79
00:41:50.849      lat (msec): min=19, max=416, avg=35.11, stdev=22.78
00:41:50.849     clat percentiles (msec):
00:41:50.849      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   33],
00:41:50.849      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.849      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.849      | 99.00th=[   42], 99.50th=[  142], 99.90th=[  401], 99.95th=[  401],
00:41:50.849      | 99.99th=[  418]
00:41:50.849     bw (  KiB/s): min=  384, max= 2048, per=4.20%, avg=1817.60, stdev=343.96, samples=20
00:41:50.849     iops        : min=   96, max=  512, avg=454.40, stdev=85.99, samples=20
00:41:50.849   lat (msec)   : 20=0.04%, 50=99.25%, 250=0.35%, 500=0.35%
00:41:50.849   cpu          : usr=98.43%, sys=1.16%, ctx=13, majf=0, minf=18
00:41:50.849   IO depths    : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0%
00:41:50.849      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.849      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.849 filename2: (groupid=0, jobs=1): err= 0: pid=467838: Sun Nov 17 11:36:13 2024
00:41:50.849   read: IOPS=451, BW=1804KiB/s (1847kB/s)(17.8MiB/10110msec)
00:41:50.849     slat (nsec): min=6726, max=70667, avg=31897.51, stdev=11888.54
00:41:50.849     clat (msec): min=32, max=402, avg=35.17, stdev=22.71
00:41:50.849      lat (msec): min=32, max=402, avg=35.20, stdev=22.71
00:41:50.849     clat percentiles (msec):
00:41:50.849      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   34],
00:41:50.849      | 30.00th=[   34], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.849      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.849      | 99.00th=[   42], 99.50th=[  142], 99.90th=[  401], 99.95th=[  401],
00:41:50.849      | 99.99th=[  401]
00:41:50.849     bw (  KiB/s): min=  384, max= 2048, per=4.20%, avg=1817.60, stdev=343.96, samples=20
00:41:50.849     iops        : min=   96, max=  512, avg=454.40, stdev=85.99, samples=20
00:41:50.849   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.849   cpu          : usr=98.56%, sys=1.05%, ctx=12, majf=0, minf=23
00:41:50.849   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:41:50.849      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.849      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.849 filename2: (groupid=0, jobs=1): err= 0: pid=467839: Sun Nov 17 11:36:13 2024
00:41:50.849   read: IOPS=451, BW=1805KiB/s (1849kB/s)(17.8MiB/10103msec)
00:41:50.849     slat (usec): min=8, max=123, avg=65.88, stdev=25.59
00:41:50.849     clat (msec): min=25, max=398, avg=34.85, stdev=22.21
00:41:50.849      lat (msec): min=25, max=399, avg=34.92, stdev=22.21
00:41:50.849     clat percentiles (msec):
00:41:50.849      |  1.00th=[   33],  5.00th=[   33], 10.00th=[   33], 20.00th=[   33],
00:41:50.849      | 30.00th=[   33], 40.00th=[   34], 50.00th=[   34], 60.00th=[   34],
00:41:50.849      | 70.00th=[   34], 80.00th=[   34], 90.00th=[   34], 95.00th=[   35],
00:41:50.849      | 99.00th=[   43], 99.50th=[  146], 99.90th=[  393], 99.95th=[  393],
00:41:50.849      | 99.99th=[  401]
00:41:50.849     bw (  KiB/s): min=  512, max= 1920, per=4.20%, avg=1817.60, stdev=312.43, samples=20
00:41:50.849     iops        : min=  128, max=  480, avg=454.40, stdev=78.11, samples=20
00:41:50.849   lat (msec)   : 50=99.30%, 250=0.35%, 500=0.35%
00:41:50.849   cpu          : usr=98.37%, sys=1.20%, ctx=13, majf=0, minf=20
00:41:50.849   IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:50.849      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:50.849      issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:50.849      latency   : target=0, window=0, percentile=100.00%, depth=16
00:41:50.849
00:41:50.849 Run status group 0 (all jobs):
00:41:50.849    READ: bw=42.3MiB/s (44.3MB/s), 1802KiB/s-1840KiB/s (1846kB/s-1884kB/s), io=429MiB (449MB), run=10016-10135msec
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849 bdev_null0
11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.849  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.850 [2024-11-17 11:36:14.162454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.850 bdev_null1
11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:41:50.850  {
00:41:50.850    "params": {
00:41:50.850      "name": "Nvme$subsystem",
00:41:50.850      "trtype": "$TEST_TRANSPORT",
00:41:50.850      "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:50.850      "adrfam": "ipv4",
00:41:50.850      "trsvcid": "$NVMF_PORT",
00:41:50.850      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:50.850      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:50.850      "hdgst": ${hdgst:-false},
00:41:50.850      "ddgst": ${ddgst:-false}
00:41:50.850    },
00:41:50.850    "method": "bdev_nvme_attach_controller"
00:41:50.850  }
00:41:50.850  EOF
00:41:50.850  )")
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:41:50.850  {
00:41:50.850    "params": {
00:41:50.850      "name": "Nvme$subsystem",
00:41:50.850      "trtype": "$TEST_TRANSPORT",
00:41:50.850      "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:50.850      "adrfam": "ipv4",
00:41:50.850      "trsvcid": "$NVMF_PORT",
00:41:50.850      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:50.850      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:50.850      "hdgst": ${hdgst:-false},
00:41:50.850      "ddgst": ${ddgst:-false}
00:41:50.850    },
00:41:50.850    "method": "bdev_nvme_attach_controller"
00:41:50.850  }
00:41:50.850  EOF
00:41:50.850  )")
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:41:50.850  "params": {
00:41:50.850  "name": "Nvme0",
00:41:50.850  "trtype": "tcp",
00:41:50.850  "traddr": "10.0.0.2",
00:41:50.850  "adrfam": "ipv4",
00:41:50.850  "trsvcid": "4420",
00:41:50.850  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:41:50.850  "hostnqn": "nqn.2016-06.io.spdk:host0",
00:41:50.850  "hdgst": false,
00:41:50.850  "ddgst": false
00:41:50.850  },
00:41:50.850  "method": "bdev_nvme_attach_controller"
00:41:50.850  },{
00:41:50.850  "params": {
00:41:50.850  "name": "Nvme1",
00:41:50.850  "trtype": "tcp",
00:41:50.850  "traddr": "10.0.0.2",
00:41:50.850  "adrfam": "ipv4",
00:41:50.850  "trsvcid": "4420",
00:41:50.850  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:41:50.850  "hostnqn": "nqn.2016-06.io.spdk:host1",
00:41:50.850  "hdgst": false,
00:41:50.850  "ddgst": false
00:41:50.850  },
00:41:50.850  "method": "bdev_nvme_attach_controller"
00:41:50.850  }'
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:50.850  11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350
-- # [[ -n '' ]] 00:41:50.850 11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:50.850 11:36:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:50.850 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.850 ... 00:41:50.850 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:50.850 ... 00:41:50.850 fio-3.35 00:41:50.850 Starting 4 threads 00:41:56.114 00:41:56.114 filename0: (groupid=0, jobs=1): err= 0: pid=469711: Sun Nov 17 11:36:20 2024 00:41:56.114 read: IOPS=1893, BW=14.8MiB/s (15.5MB/s)(74.0MiB/5001msec) 00:41:56.114 slat (nsec): min=3988, max=73462, avg=19135.49, stdev=10455.83 00:41:56.114 clat (usec): min=1069, max=7414, avg=4159.26, stdev=503.01 00:41:56.114 lat (usec): min=1077, max=7442, avg=4178.39, stdev=503.46 00:41:56.114 clat percentiles (usec): 00:41:56.114 | 1.00th=[ 2606], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3916], 00:41:56.114 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:56.114 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:41:56.114 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 7308], 99.95th=[ 7373], 00:41:56.114 | 99.99th=[ 7439] 00:41:56.114 bw ( KiB/s): min=14976, max=15376, per=25.34%, avg=15168.00, stdev=121.06, samples=9 00:41:56.114 iops : min= 1872, max= 1922, avg=1896.00, stdev=15.13, samples=9 00:41:56.114 lat (msec) : 2=0.36%, 4=25.68%, 10=73.96% 00:41:56.114 cpu : usr=93.60%, sys=4.60%, ctx=203, majf=0, minf=38 00:41:56.114 IO depths : 1=0.6%, 2=14.1%, 4=57.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 complete 
: 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 issued rwts: total=9469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.114 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.114 filename0: (groupid=0, jobs=1): err= 0: pid=469712: Sun Nov 17 11:36:20 2024 00:41:56.114 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5001msec) 00:41:56.114 slat (nsec): min=5581, max=72679, avg=20414.77, stdev=10649.57 00:41:56.114 clat (usec): min=868, max=7829, avg=4216.32, stdev=582.25 00:41:56.114 lat (usec): min=880, max=7847, avg=4236.73, stdev=582.33 00:41:56.114 clat percentiles (usec): 00:41:56.114 | 1.00th=[ 2147], 5.00th=[ 3458], 10.00th=[ 3785], 20.00th=[ 3982], 00:41:56.114 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:41:56.114 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5145], 00:41:56.114 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7504], 00:41:56.114 | 99.99th=[ 7832] 00:41:56.114 bw ( KiB/s): min=14528, max=15120, per=24.91%, avg=14909.89, stdev=203.89, samples=9 00:41:56.114 iops : min= 1816, max= 1890, avg=1863.67, stdev=25.52, samples=9 00:41:56.114 lat (usec) : 1000=0.03% 00:41:56.114 lat (msec) : 2=0.73%, 4=20.03%, 10=79.21% 00:41:56.114 cpu : usr=95.62%, sys=3.88%, ctx=11, majf=0, minf=35 00:41:56.114 IO depths : 1=0.3%, 2=16.6%, 4=56.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 issued rwts: total=9325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.114 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.114 filename1: (groupid=0, jobs=1): err= 0: pid=469713: Sun Nov 17 11:36:20 2024 00:41:56.114 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5001msec) 00:41:56.114 slat (nsec): min=5689, max=72725, avg=20777.00, stdev=11571.03 00:41:56.114 clat (usec): min=852, 
max=7696, avg=4230.81, stdev=638.49 00:41:56.114 lat (usec): min=865, max=7723, avg=4251.59, stdev=638.73 00:41:56.114 clat percentiles (usec): 00:41:56.114 | 1.00th=[ 2073], 5.00th=[ 3392], 10.00th=[ 3752], 20.00th=[ 3982], 00:41:56.114 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:56.114 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5342], 00:41:56.114 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7373], 99.95th=[ 7570], 00:41:56.114 | 99.99th=[ 7701] 00:41:56.114 bw ( KiB/s): min=14512, max=15248, per=24.82%, avg=14857.30, stdev=219.06, samples=10 00:41:56.114 iops : min= 1814, max= 1906, avg=1857.10, stdev=27.39, samples=10 00:41:56.114 lat (usec) : 1000=0.09% 00:41:56.114 lat (msec) : 2=0.81%, 4=19.87%, 10=79.24% 00:41:56.114 cpu : usr=94.76%, sys=4.78%, ctx=7, majf=0, minf=36 00:41:56.114 IO depths : 1=0.2%, 2=15.6%, 4=57.0%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 issued rwts: total=9292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.114 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.114 filename1: (groupid=0, jobs=1): err= 0: pid=469714: Sun Nov 17 11:36:20 2024 00:41:56.114 read: IOPS=1869, BW=14.6MiB/s (15.3MB/s)(73.1MiB/5004msec) 00:41:56.114 slat (nsec): min=3859, max=72735, avg=19348.73, stdev=11411.57 00:41:56.114 clat (usec): min=898, max=9364, avg=4209.60, stdev=547.10 00:41:56.114 lat (usec): min=911, max=9391, avg=4228.95, stdev=547.30 00:41:56.114 clat percentiles (usec): 00:41:56.114 | 1.00th=[ 2671], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3982], 00:41:56.114 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:41:56.114 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:41:56.114 | 99.00th=[ 6063], 99.50th=[ 6718], 99.90th=[ 7635], 99.95th=[ 9241], 
00:41:56.114 | 99.99th=[ 9372] 00:41:56.114 bw ( KiB/s): min=14560, max=15344, per=24.99%, avg=14958.40, stdev=229.82, samples=10 00:41:56.114 iops : min= 1820, max= 1918, avg=1869.80, stdev=28.73, samples=10 00:41:56.114 lat (usec) : 1000=0.05% 00:41:56.114 lat (msec) : 2=0.45%, 4=21.94%, 10=77.56% 00:41:56.114 cpu : usr=95.14%, sys=4.40%, ctx=7, majf=0, minf=85 00:41:56.114 IO depths : 1=0.2%, 2=16.0%, 4=56.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:56.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:56.114 issued rwts: total=9357,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:56.114 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:56.114 00:41:56.114 Run status group 0 (all jobs): 00:41:56.114 READ: bw=58.5MiB/s (61.3MB/s), 14.5MiB/s-14.8MiB/s (15.2MB/s-15.5MB/s), io=293MiB (307MB), run=5001-5004msec 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:56.115 11:36:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 00:41:56.115 real 0m24.332s 00:41:56.115 user 4m34.645s 00:41:56.115 sys 0m6.455s 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 ************************************ 00:41:56.115 END TEST fio_dif_rand_params 00:41:56.115 ************************************ 00:41:56.115 11:36:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:56.115 11:36:20 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:56.115 11:36:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 ************************************ 00:41:56.115 START TEST fio_dif_digest 00:41:56.115 ************************************ 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 bdev_null0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:56.115 [2024-11-17 11:36:20.641323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:56.115 { 00:41:56.115 "params": { 00:41:56.115 "name": "Nvme$subsystem", 00:41:56.115 "trtype": "$TEST_TRANSPORT", 00:41:56.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:56.115 "adrfam": "ipv4", 00:41:56.115 "trsvcid": "$NVMF_PORT", 00:41:56.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:56.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:56.115 "hdgst": ${hdgst:-false}, 00:41:56.115 "ddgst": ${ddgst:-false} 00:41:56.115 }, 00:41:56.115 "method": "bdev_nvme_attach_controller" 00:41:56.115 } 00:41:56.115 EOF 00:41:56.115 )") 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:56.115 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:56.116 "params": { 00:41:56.116 "name": "Nvme0", 00:41:56.116 "trtype": "tcp", 00:41:56.116 "traddr": "10.0.0.2", 00:41:56.116 "adrfam": "ipv4", 00:41:56.116 "trsvcid": "4420", 00:41:56.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:56.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:56.116 "hdgst": true, 00:41:56.116 "ddgst": true 00:41:56.116 }, 00:41:56.116 "method": "bdev_nvme_attach_controller" 00:41:56.116 }' 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:56.116 11:36:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:56.374 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:56.374 ... 
00:41:56.374 fio-3.35 00:41:56.374 Starting 3 threads 00:42:08.567 00:42:08.567 filename0: (groupid=0, jobs=1): err= 0: pid=470469: Sun Nov 17 11:36:31 2024 00:42:08.567 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(268MiB/10044msec) 00:42:08.567 slat (nsec): min=4320, max=42234, avg=16202.26, stdev=4197.34 00:42:08.567 clat (usec): min=10722, max=54055, avg=14040.59, stdev=1490.61 00:42:08.567 lat (usec): min=10737, max=54082, avg=14056.80, stdev=1490.55 00:42:08.567 clat percentiles (usec): 00:42:08.567 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:42:08.567 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:42:08.567 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:42:08.567 | 99.00th=[16450], 99.50th=[16712], 99.90th=[21365], 99.95th=[46924], 00:42:08.567 | 99.99th=[54264] 00:42:08.567 bw ( KiB/s): min=26624, max=27904, per=34.15%, avg=27366.40, stdev=397.46, samples=20 00:42:08.567 iops : min= 208, max= 218, avg=213.80, stdev= 3.11, samples=20 00:42:08.567 lat (msec) : 20=99.86%, 50=0.09%, 100=0.05% 00:42:08.567 cpu : usr=90.78%, sys=6.95%, ctx=371, majf=0, minf=84 00:42:08.567 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.567 filename0: (groupid=0, jobs=1): err= 0: pid=470470: Sun Nov 17 11:36:31 2024 00:42:08.567 read: IOPS=209, BW=26.2MiB/s (27.5MB/s)(264MiB/10047msec) 00:42:08.567 slat (nsec): min=4206, max=48032, avg=13889.78, stdev=1751.70 00:42:08.567 clat (usec): min=11131, max=49492, avg=14254.09, stdev=1449.60 00:42:08.567 lat (usec): min=11144, max=49506, avg=14267.98, stdev=1449.49 00:42:08.567 clat percentiles (usec): 00:42:08.567 | 
1.00th=[11994], 5.00th=[12649], 10.00th=[13042], 20.00th=[13435], 00:42:08.567 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:42:08.567 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:42:08.567 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17695], 99.95th=[49546], 00:42:08.567 | 99.99th=[49546] 00:42:08.567 bw ( KiB/s): min=26368, max=28160, per=33.64%, avg=26956.80, stdev=432.38, samples=20 00:42:08.567 iops : min= 206, max= 220, avg=210.60, stdev= 3.38, samples=20 00:42:08.567 lat (msec) : 20=99.91%, 50=0.09% 00:42:08.567 cpu : usr=94.36%, sys=5.14%, ctx=27, majf=0, minf=163 00:42:08.567 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.567 filename0: (groupid=0, jobs=1): err= 0: pid=470471: Sun Nov 17 11:36:31 2024 00:42:08.567 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10047msec) 00:42:08.567 slat (nsec): min=4100, max=25886, avg=13548.51, stdev=1364.44 00:42:08.567 clat (usec): min=10621, max=51111, avg=14729.84, stdev=1473.16 00:42:08.567 lat (usec): min=10634, max=51125, avg=14743.39, stdev=1473.11 00:42:08.567 clat percentiles (usec): 00:42:08.567 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:42:08.567 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:42:08.567 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16450], 00:42:08.567 | 99.00th=[17171], 99.50th=[17433], 99.90th=[21365], 99.95th=[47973], 00:42:08.567 | 99.99th=[51119] 00:42:08.567 bw ( KiB/s): min=25344, max=26880, per=32.55%, avg=26086.40, stdev=406.05, samples=20 00:42:08.567 iops : min= 198, max= 210, avg=203.80, stdev= 3.17, samples=20 
00:42:08.567 lat (msec) : 20=99.80%, 50=0.15%, 100=0.05% 00:42:08.567 cpu : usr=94.56%, sys=4.97%, ctx=18, majf=0, minf=115 00:42:08.567 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:08.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:08.567 issued rwts: total=2041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:08.567 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:08.567 00:42:08.567 Run status group 0 (all jobs): 00:42:08.567 READ: bw=78.3MiB/s (82.1MB/s), 25.4MiB/s-26.6MiB/s (26.6MB/s-27.9MB/s), io=786MiB (824MB), run=10044-10047msec 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.567 00:42:08.567 real 0m11.097s 
00:42:08.567 user 0m29.192s 00:42:08.567 sys 0m2.002s 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:08.567 11:36:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:08.567 ************************************ 00:42:08.567 END TEST fio_dif_digest 00:42:08.567 ************************************ 00:42:08.567 11:36:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:08.567 11:36:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.567 rmmod nvme_tcp 00:42:08.567 rmmod nvme_fabrics 00:42:08.567 rmmod nvme_keyring 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 463810 ']' 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 463810 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 463810 ']' 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 463810 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463810 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:08.567 11:36:31 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463810' 00:42:08.567 killing process with pid 463810 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@973 -- # kill 463810 00:42:08.567 11:36:31 nvmf_dif -- common/autotest_common.sh@978 -- # wait 463810 00:42:08.567 11:36:31 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:08.567 11:36:32 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:08.567 Waiting for block devices as requested 00:42:08.567 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:08.826 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:08.826 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:09.085 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:09.085 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:09.085 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:09.344 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:09.344 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:09.344 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:09.344 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:09.602 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:09.602 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:09.602 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:09.602 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:09.861 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:09.861 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:09.861 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:10.121 11:36:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.121 11:36:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:10.121 11:36:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.025 11:36:36 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:12.025 00:42:12.025 real 1m7.108s 00:42:12.025 user 6m31.168s 00:42:12.025 sys 0m18.228s 00:42:12.025 11:36:36 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:12.025 11:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.025 ************************************ 00:42:12.025 END TEST nvmf_dif 00:42:12.025 ************************************ 00:42:12.025 11:36:36 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.025 11:36:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:12.025 11:36:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:12.025 11:36:36 -- common/autotest_common.sh@10 -- # set +x 00:42:12.287 ************************************ 00:42:12.287 START TEST nvmf_abort_qd_sizes 00:42:12.287 ************************************ 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.287 * Looking for test storage... 
00:42:12.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.287 --rc genhtml_branch_coverage=1 00:42:12.287 --rc genhtml_function_coverage=1 00:42:12.287 --rc genhtml_legend=1 00:42:12.287 --rc geninfo_all_blocks=1 00:42:12.287 --rc geninfo_unexecuted_blocks=1 00:42:12.287 00:42:12.287 ' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.287 --rc genhtml_branch_coverage=1 00:42:12.287 --rc genhtml_function_coverage=1 00:42:12.287 --rc genhtml_legend=1 00:42:12.287 --rc 
geninfo_all_blocks=1 00:42:12.287 --rc geninfo_unexecuted_blocks=1 00:42:12.287 00:42:12.287 ' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.287 --rc genhtml_branch_coverage=1 00:42:12.287 --rc genhtml_function_coverage=1 00:42:12.287 --rc genhtml_legend=1 00:42:12.287 --rc geninfo_all_blocks=1 00:42:12.287 --rc geninfo_unexecuted_blocks=1 00:42:12.287 00:42:12.287 ' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:12.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.287 --rc genhtml_branch_coverage=1 00:42:12.287 --rc genhtml_function_coverage=1 00:42:12.287 --rc genhtml_legend=1 00:42:12.287 --rc geninfo_all_blocks=1 00:42:12.287 --rc geninfo_unexecuted_blocks=1 00:42:12.287 00:42:12.287 ' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.287 11:36:36 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.287 11:36:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.288 11:36:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:12.288 11:36:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:14.190 11:36:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:14.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:14.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:14.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:14.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:14.190 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:14.191 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:14.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:14.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:42:14.451 00:42:14.451 --- 10.0.0.2 ping statistics --- 00:42:14.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.451 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:14.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:14.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:42:14.451 00:42:14.451 --- 10.0.0.1 ping statistics --- 00:42:14.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:14.451 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:14.451 11:36:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:15.836 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:15.836 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:15.836 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:16.775 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:16.775 11:36:41 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=475379 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 475379 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 475379 ']' 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:16.775 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.033 [2024-11-17 11:36:41.475177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:17.033 [2024-11-17 11:36:41.475261] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.033 [2024-11-17 11:36:41.545857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:17.033 [2024-11-17 11:36:41.590792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:17.033 [2024-11-17 11:36:41.590856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:17.033 [2024-11-17 11:36:41.590883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:17.033 [2024-11-17 11:36:41.590895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:17.033 [2024-11-17 11:36:41.590904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:17.033 [2024-11-17 11:36:41.592282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.033 [2024-11-17 11:36:41.592392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:17.033 [2024-11-17 11:36:41.592522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:17.033 [2024-11-17 11:36:41.592531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:17.291 11:36:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:17.291 ************************************ 00:42:17.291 START TEST spdk_target_abort 00:42:17.291 ************************************ 00:42:17.291 11:36:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:17.291 11:36:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:17.291 11:36:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:17.291 11:36:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.292 11:36:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:20.572 spdk_targetn1 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:20.572 [2024-11-17 11:36:44.607285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:20.572 [2024-11-17 11:36:44.647646] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:20.572 11:36:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:23.853 Initializing NVMe Controllers 00:42:23.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:23.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:23.853 Initialization complete. Launching workers. 
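The rabort helper traced above assembles the transport ID one field at a time and then drives the SPDK abort example once per queue depth (`qds=(4 24 64)`). The following is a dry-run sketch of that loop: the binary path, workload flags, and transport ID are taken from the log, but the loop is an illustrative rewrite that prints the commands instead of executing them (the real run needs the live TCP target).

```shell
# Dry-run reconstruction of the abort_qd_sizes queue-depth loop traced above.
# The binary path, workload flags, and transport ID come from the log; the
# loop itself only prints each command rather than executing it.
ABORT_BIN="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort"
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -q: queue depth under test, -w rw -M 50: 50/50 read/write mix,
    # -o 4096: 4 KiB I/Os, -r: transport ID of the target
    echo "$ABORT_BIN -q $qd -w rw -M 50 -o 4096 -r '$TARGET'"
done
```

Each pass reports I/O completed, aborts submitted, and success/unsuccessful counts, which is what the NS:/CTRLR: summary lines below show for each queue depth.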
00:42:23.853 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12192, failed: 0 00:42:23.853 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1193, failed to submit 10999 00:42:23.853 success 718, unsuccessful 475, failed 0 00:42:23.853 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:23.853 11:36:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:27.130 Initializing NVMe Controllers 00:42:27.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:27.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:27.130 Initialization complete. Launching workers. 00:42:27.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8567, failed: 0 00:42:27.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1246, failed to submit 7321 00:42:27.130 success 322, unsuccessful 924, failed 0 00:42:27.130 11:36:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:27.130 11:36:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:30.414 Initializing NVMe Controllers 00:42:30.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:30.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:30.414 Initialization complete. Launching workers. 
00:42:30.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30493, failed: 0 00:42:30.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2728, failed to submit 27765 00:42:30.414 success 482, unsuccessful 2246, failed 0 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:30.414 11:36:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 475379 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 475379 ']' 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 475379 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475379 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475379' 00:42:31.347 killing process with pid 475379 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 475379 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 475379 00:42:31.347 00:42:31.347 real 0m14.174s 00:42:31.347 user 0m53.895s 00:42:31.347 sys 0m2.467s 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:31.347 ************************************ 00:42:31.347 END TEST spdk_target_abort 00:42:31.347 ************************************ 00:42:31.347 11:36:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:31.347 11:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:31.347 11:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:31.347 11:36:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:31.347 ************************************ 00:42:31.347 START TEST kernel_target_abort 00:42:31.347 ************************************ 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:31.347 11:36:55 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:42:31.347 11:36:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:31.606 11:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:31.606 11:36:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:32.542 Waiting for block devices as requested 00:42:32.542 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:32.801 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:32.801 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:33.059 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:33.059 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:33.059 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:33.059 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:33.319 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:33.319 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:33.319 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:33.578 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:33.578 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:33.578 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:33.578 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:33.836 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:33.836 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:33.836 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:34.095 No valid GPT data, bailing 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:34.095 00:42:34.095 Discovery Log Number of Records 2, Generation counter 2 00:42:34.095 =====Discovery Log Entry 0====== 00:42:34.095 trtype: tcp 00:42:34.095 adrfam: ipv4 00:42:34.095 subtype: current discovery subsystem 00:42:34.095 treq: not specified, sq flow control disable supported 00:42:34.095 portid: 1 00:42:34.095 trsvcid: 4420 00:42:34.095 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:34.095 traddr: 10.0.0.1 00:42:34.095 eflags: none 00:42:34.095 sectype: none 00:42:34.095 =====Discovery Log Entry 1====== 00:42:34.095 trtype: tcp 00:42:34.095 adrfam: ipv4 00:42:34.095 subtype: nvme subsystem 00:42:34.095 treq: not specified, sq flow control disable supported 00:42:34.095 portid: 1 00:42:34.095 trsvcid: 4420 00:42:34.095 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:34.095 traddr: 10.0.0.1 00:42:34.095 eflags: none 00:42:34.095 sectype: none 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:34.095 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:34.096 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:34.096 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:34.096 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:34.096 11:36:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:37.377 Initializing NVMe Controllers 00:42:37.377 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:37.377 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:37.377 Initialization complete. Launching workers. 
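The configure_kernel_target trace a little earlier (the mkdir/echo/ln -s run under /sys/kernel/config/nvmet) is the standard Linux nvmet configfs sequence. Bash xtrace drops the redirect targets of the echo commands, so the attribute paths below are reconstructed from the usual nvmet layout rather than read from the log; this is a sketch only, requiring root and the nvmet/nvmet_tcp modules, and is not runnable in a sandbox.

```shell
# Sketch of the kernel NVMe-oF target setup traced by configure_kernel_target.
# Attribute file names are the standard nvmet configfs ones (reconstructed;
# the log's xtrace elides the redirect targets). Requires root + nvmet modules.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

# Subsystem identity and access control
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"  # exact attribute elided by xtrace
echo 1 > "$subsys/attr_allow_any_host"

# Back namespace 1 with the local NVMe block device and enable it
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# TCP listener on 10.0.0.1:4420
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Publishing the subsystem on the port is just a symlink
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```

The clean_kernel_target teardown later in the log undoes this in reverse: remove the symlink, rmdir the namespace, port, and subsystem directories, then `modprobe -r nvmet_tcp nvmet`.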
00:42:37.377 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57161, failed: 0 00:42:37.377 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 57161, failed to submit 0 00:42:37.377 success 0, unsuccessful 57161, failed 0 00:42:37.377 11:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:37.377 11:37:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:40.658 Initializing NVMe Controllers 00:42:40.658 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:40.658 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:40.658 Initialization complete. Launching workers. 00:42:40.658 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101111, failed: 0 00:42:40.658 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25482, failed to submit 75629 00:42:40.658 success 0, unsuccessful 25482, failed 0 00:42:40.658 11:37:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:40.658 11:37:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:43.940 Initializing NVMe Controllers 00:42:43.940 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:43.940 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:43.940 Initialization complete. Launching workers. 
00:42:43.940 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97988, failed: 0 00:42:43.940 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24518, failed to submit 73470 00:42:43.940 success 0, unsuccessful 24518, failed 0 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:43.940 11:37:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:44.877 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:44.877 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:44.877 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:44.877 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:45.816 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:45.816 00:42:45.816 real 0m14.369s 00:42:45.816 user 0m6.611s 00:42:45.816 sys 0m3.222s 00:42:45.816 11:37:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.816 11:37:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:45.816 ************************************ 00:42:45.816 END TEST kernel_target_abort 00:42:45.816 ************************************ 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:45.816 rmmod nvme_tcp 00:42:45.816 rmmod nvme_fabrics 00:42:45.816 rmmod nvme_keyring 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 475379 ']' 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 475379 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 475379 ']' 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 475379 00:42:45.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (475379) - No such process 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 475379 is not found' 00:42:45.816 Process with pid 475379 is not found 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:45.816 11:37:10 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:47.192 Waiting for block devices as requested 00:42:47.192 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:47.192 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:47.451 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:47.451 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:47.451 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:47.712 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:47.712 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:47.712 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:47.712 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:47.971 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:47.971 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:47.971 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:48.245 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:48.246 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:48.246 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:42:48.246 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:48.246 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:48.507 11:37:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:50.413 11:37:15 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:50.413 00:42:50.413 real 0m38.371s 00:42:50.413 user 1m2.829s 00:42:50.413 sys 0m9.351s 00:42:50.413 11:37:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:50.413 11:37:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:50.413 ************************************ 00:42:50.413 END TEST nvmf_abort_qd_sizes 00:42:50.413 ************************************ 00:42:50.673 11:37:15 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:50.673 11:37:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:50.673 11:37:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:42:50.673 11:37:15 -- common/autotest_common.sh@10 -- # set +x 00:42:50.673 ************************************ 00:42:50.673 START TEST keyring_file 00:42:50.673 ************************************ 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:50.673 * Looking for test storage... 00:42:50.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:50.673 11:37:15 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:50.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:50.673 --rc genhtml_branch_coverage=1 00:42:50.673 --rc genhtml_function_coverage=1 00:42:50.673 --rc genhtml_legend=1 00:42:50.673 --rc geninfo_all_blocks=1 00:42:50.673 --rc geninfo_unexecuted_blocks=1 00:42:50.673 00:42:50.673 ' 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:50.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:50.673 --rc genhtml_branch_coverage=1 00:42:50.673 --rc genhtml_function_coverage=1 00:42:50.673 --rc genhtml_legend=1 00:42:50.673 --rc geninfo_all_blocks=1 00:42:50.673 --rc 
geninfo_unexecuted_blocks=1 00:42:50.673 00:42:50.673 ' 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:50.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:50.673 --rc genhtml_branch_coverage=1 00:42:50.673 --rc genhtml_function_coverage=1 00:42:50.673 --rc genhtml_legend=1 00:42:50.673 --rc geninfo_all_blocks=1 00:42:50.673 --rc geninfo_unexecuted_blocks=1 00:42:50.673 00:42:50.673 ' 00:42:50.673 11:37:15 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:50.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:50.673 --rc genhtml_branch_coverage=1 00:42:50.673 --rc genhtml_function_coverage=1 00:42:50.673 --rc genhtml_legend=1 00:42:50.673 --rc geninfo_all_blocks=1 00:42:50.673 --rc geninfo_unexecuted_blocks=1 00:42:50.673 00:42:50.673 ' 00:42:50.673 11:37:15 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:50.673 11:37:15 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:50.673 11:37:15 keyring_file -- 
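The `cmp_versions`/`lt` trace above (scripts/common.sh@333-368) splits two version strings on `.`, `-`, and `:` and compares them element by element. A self-contained re-implementation of that pattern, assuming purely numeric components (the real script also routes each component through its `decimal` helper, which this sketch omits):

```shell
# Sketch of scripts/common.sh's version comparison as seen in the trace.
# lt/cmp_versions mirror the traced names; this is a re-implementation,
# not the original source.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
  local IFS=.-:              # split components on dots, dashes, and colons
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  local op=$2
  read -ra ver2 <<< "$3"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  # Walk the longer of the two arrays; missing components default to 0.
  for (( v = 0; v < max; v++ )); do
    if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
      [[ $op == ">" || $op == ">=" ]]
      return
    elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
      [[ $op == "<" || $op == "<=" ]]
      return
    fi
  done
  # All components equal
  [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}
```

With this, `lt 1.15 2` succeeds (1 < 2 on the first component), which is why the trace takes the pre-2.x lcov branch.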
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:50.673 11:37:15 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:50.673 11:37:15 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:50.674 11:37:15 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.674 11:37:15 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.674 11:37:15 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.674 11:37:15 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:50.674 11:37:15 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:42:50.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tvf7spzjSr 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tvf7spzjSr 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tvf7spzjSr 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.tvf7spzjSr 00:42:50.674 11:37:15 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6QmVcvvdIX 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:50.674 11:37:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6QmVcvvdIX 00:42:50.674 11:37:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6QmVcvvdIX 00:42:50.932 11:37:15 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.6QmVcvvdIX 
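The `prep_key` calls traced above each create a temp file, format the hex key into an NVMe TLS PSK interchange string via an inline `python -` snippet, and `chmod 0600` the result. A hedged sketch of that flow; the exact encoding (base64 of the key bytes plus a little-endian CRC32, digest field "00" for no hash) is an assumption based on the NVMe/TCP PSK interchange format, since the log elides the python body:

```shell
# Hypothetical stand-in for keyring/common.sh's prep_key +
# nvmf/common.sh's format_interchange_psk as exercised in the trace.
prep_key_sketch() {
  local key=$1 path
  path=$(mktemp)
  python3 - "$key" > "$path" <<'EOF'
import base64, sys, zlib
psk = bytes.fromhex(sys.argv[1])
# Interchange string: prefix, digest id ("00" = no hash), base64(key || CRC32)
crc = zlib.crc32(psk).to_bytes(4, "little")
print(f"NVMeTLSkey-1:00:{base64.b64encode(psk + crc).decode()}:")
EOF
  chmod 0600 "$path"   # keyring_file_add_key rejects more permissive modes
  echo "$path"
}
```

The 0600 step matters: later in this trace the same file is deliberately re-chmodded to 0660 and the add is expected to fail.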
00:42:50.932 11:37:15 keyring_file -- keyring/file.sh@30 -- # tgtpid=481145 00:42:50.932 11:37:15 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:50.932 11:37:15 keyring_file -- keyring/file.sh@32 -- # waitforlisten 481145 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 481145 ']' 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:50.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:50.932 11:37:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:50.932 [2024-11-17 11:37:15.381790] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:42:50.932 [2024-11-17 11:37:15.381911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481145 ] 00:42:50.932 [2024-11-17 11:37:15.447361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.932 [2024-11-17 11:37:15.493386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:51.191 11:37:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:51.191 [2024-11-17 11:37:15.732882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:51.191 null0 00:42:51.191 [2024-11-17 11:37:15.764937] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:51.191 [2024-11-17 11:37:15.765463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.191 11:37:15 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:51.191 [2024-11-17 11:37:15.792974] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:51.191 request: 00:42:51.191 { 00:42:51.191 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:51.191 "secure_channel": false, 00:42:51.191 "listen_address": { 00:42:51.191 "trtype": "tcp", 00:42:51.191 "traddr": "127.0.0.1", 00:42:51.191 "trsvcid": "4420" 00:42:51.191 }, 00:42:51.191 "method": "nvmf_subsystem_add_listener", 00:42:51.191 "req_id": 1 00:42:51.191 } 00:42:51.191 Got JSON-RPC error response 00:42:51.191 response: 00:42:51.191 { 00:42:51.191 "code": -32602, 00:42:51.191 "message": "Invalid parameters" 00:42:51.191 } 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:51.191 11:37:15 keyring_file -- keyring/file.sh@47 -- # bperfpid=481157 00:42:51.191 11:37:15 keyring_file -- keyring/file.sh@49 -- # waitforlisten 481157 /var/tmp/bperf.sock 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 481157 ']' 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:51.191 11:37:15 
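The `NOT` wrapper exercised above runs a command that is expected to fail (here, adding the same listener twice) and inverts the result, with the traced `es` bookkeeping treating exit statuses above 128 (death by signal) as real failures rather than expected ones. A minimal standalone version of that logic:

```shell
# Succeeds only when "$@" exits non-zero, as in autotest_common.sh's NOT;
# signal-terminated commands (es > 128) still count as genuine failures.
NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return 1   # killed by a signal: not an "expected" failure
  (( !es == 0 ))               # true (exit 0) exactly when the command failed
}
```

So `NOT rpc_cmd nvmf_subsystem_add_listener ...` in the trace passes precisely because the RPC returned the "Listener already exists" error.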
keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:51.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:51.191 11:37:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:51.191 [2024-11-17 11:37:15.842427] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 00:42:51.191 [2024-11-17 11:37:15.842495] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481157 ] 00:42:51.450 [2024-11-17 11:37:15.908234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.450 [2024-11-17 11:37:15.959179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:51.450 11:37:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:51.450 11:37:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:42:51.450 11:37:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:51.450 11:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:51.709 11:37:16 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6QmVcvvdIX 00:42:51.709 11:37:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6QmVcvvdIX 00:42:51.967 11:37:16 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:51.967 11:37:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:51.967 11:37:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.967 11:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.967 11:37:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:52.534 11:37:16 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tvf7spzjSr == \/\t\m\p\/\t\m\p\.\t\v\f\7\s\p\z\j\S\r ]] 00:42:52.534 11:37:16 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:52.534 11:37:16 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:52.534 11:37:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.534 11:37:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:52.534 11:37:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.534 11:37:17 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.6QmVcvvdIX == \/\t\m\p\/\t\m\p\.\6\Q\m\V\c\v\v\d\I\X ]] 00:42:52.534 11:37:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:52.534 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:52.534 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:52.534 11:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:52.534 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:52.534 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:53.100 11:37:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:53.100 11:37:17 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:53.100 11:37:17 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:53.100 11:37:17 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.100 11:37:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:53.358 [2024-11-17 11:37:17.976327] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:53.616 nvme0n1 00:42:53.616 11:37:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:53.616 11:37:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:53.616 11:37:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.616 11:37:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.616 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.616 11:37:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
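Every `get_refcnt` check in this trace is the same pipeline: dump the keyring with `keyring_get_keys`, select one entry by name with `jq`, and read its `refcnt` field. A standalone sketch of that helper against a canned dump (the `refcnt` values below are illustrative, not taken from the log):

```shell
# Mirrors keyring/common.sh's get_key/get_refcnt jq pipeline from the trace.
get_refcnt_sketch() {
  # $1 = key name, $2 = keyring_get_keys JSON dump
  jq -r --arg n "$1" '.[] | select(.name == $n) | .refcnt' <<< "$2"
}

# Illustrative dump; field shapes mirror the trace, the counts are made up.
keys='[{"name":"key0","path":"/tmp/tmp.tvf7spzjSr","refcnt":2},
       {"name":"key1","path":"/tmp/tmp.6QmVcvvdIX","refcnt":1}]'
```

The trace's `(( 2 == 2 ))` / `(( 1 == 1 ))` checks are simply comparing this helper's output against the expected reference counts after attaching the controller with `--psk key0`.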
"key0")' 00:42:53.905 11:37:18 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:53.905 11:37:18 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:53.905 11:37:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:53.905 11:37:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:53.905 11:37:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:53.905 11:37:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.905 11:37:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:54.206 11:37:18 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:54.206 11:37:18 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:54.206 Running I/O for 1 seconds... 00:42:55.178 10455.00 IOPS, 40.84 MiB/s 00:42:55.178 Latency(us) 00:42:55.178 [2024-11-17T10:37:19.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:55.178 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:55.178 nvme0n1 : 1.01 10505.33 41.04 0.00 0.00 12143.73 4344.79 18155.90 00:42:55.178 [2024-11-17T10:37:19.836Z] =================================================================================================================== 00:42:55.178 [2024-11-17T10:37:19.836Z] Total : 10505.33 41.04 0.00 0.00 12143.73 4344.79 18155.90 00:42:55.178 { 00:42:55.178 "results": [ 00:42:55.178 { 00:42:55.178 "job": "nvme0n1", 00:42:55.178 "core_mask": "0x2", 00:42:55.178 "workload": "randrw", 00:42:55.178 "percentage": 50, 00:42:55.178 "status": "finished", 00:42:55.178 "queue_depth": 128, 00:42:55.178 "io_size": 4096, 00:42:55.178 "runtime": 1.007489, 00:42:55.178 "iops": 10505.325616458344, 00:42:55.178 "mibps": 41.036428189290405, 
00:42:55.178 "io_failed": 0, 00:42:55.178 "io_timeout": 0, 00:42:55.178 "avg_latency_us": 12143.732415105958, 00:42:55.178 "min_latency_us": 4344.794074074074, 00:42:55.178 "max_latency_us": 18155.89925925926 00:42:55.178 } 00:42:55.178 ], 00:42:55.178 "core_count": 1 00:42:55.178 } 00:42:55.178 11:37:19 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:55.178 11:37:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:55.436 11:37:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:55.436 11:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:55.436 11:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.436 11:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.436 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.436 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:55.695 11:37:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:55.695 11:37:20 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:55.695 11:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:55.695 11:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:55.695 11:37:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:55.695 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:55.695 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:55.953 11:37:20 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:55.953 11:37:20 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:55.953 11:37:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:55.953 11:37:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:55.953 11:37:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:55.953 11:37:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.954 11:37:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:55.954 11:37:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:55.954 11:37:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:55.954 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:56.214 [2024-11-17 11:37:20.862410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:56.214 [2024-11-17 11:37:20.862995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7ab60 (107): Transport endpoint is not connected 00:42:56.214 [2024-11-17 11:37:20.863986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa7ab60 (9): Bad file descriptor 00:42:56.214 [2024-11-17 11:37:20.864985] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:42:56.214 [2024-11-17 11:37:20.865004] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:56.214 [2024-11-17 11:37:20.865032] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:56.214 [2024-11-17 11:37:20.865058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:42:56.214 request: 00:42:56.214 { 00:42:56.214 "name": "nvme0", 00:42:56.214 "trtype": "tcp", 00:42:56.214 "traddr": "127.0.0.1", 00:42:56.214 "adrfam": "ipv4", 00:42:56.214 "trsvcid": "4420", 00:42:56.214 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:56.214 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:56.214 "prchk_reftag": false, 00:42:56.214 "prchk_guard": false, 00:42:56.214 "hdgst": false, 00:42:56.214 "ddgst": false, 00:42:56.214 "psk": "key1", 00:42:56.214 "allow_unrecognized_csi": false, 00:42:56.214 "method": "bdev_nvme_attach_controller", 00:42:56.214 "req_id": 1 00:42:56.214 } 00:42:56.214 Got JSON-RPC error response 00:42:56.214 response: 00:42:56.214 { 00:42:56.214 "code": -5, 00:42:56.214 "message": "Input/output error" 00:42:56.214 } 00:42:56.473 11:37:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:56.473 11:37:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:56.473 11:37:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:56.473 11:37:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:56.473 11:37:20 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:56.473 11:37:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:56.473 11:37:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:56.473 11:37:20 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:42:56.473 11:37:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:56.473 11:37:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.732 11:37:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:56.732 11:37:21 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:56.732 11:37:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:56.732 11:37:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:56.732 11:37:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:56.732 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:56.732 11:37:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:56.990 11:37:21 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:56.990 11:37:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:56.990 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:57.248 11:37:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:57.248 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:57.506 11:37:21 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:57.506 11:37:21 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:57.506 11:37:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:57.764 11:37:22 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:42:57.764 11:37:22 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.tvf7spzjSr 00:42:57.764 11:37:22 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:57.764 11:37:22 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:57.764 11:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:58.022 [2024-11-17 11:37:22.487961] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tvf7spzjSr': 0100660 00:42:58.022 [2024-11-17 11:37:22.488006] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:58.022 request: 00:42:58.022 { 00:42:58.022 "name": "key0", 00:42:58.022 "path": "/tmp/tmp.tvf7spzjSr", 00:42:58.022 "method": "keyring_file_add_key", 00:42:58.022 "req_id": 1 00:42:58.022 } 00:42:58.022 Got JSON-RPC error response 00:42:58.022 response: 00:42:58.022 { 00:42:58.022 "code": -1, 00:42:58.022 "message": "Operation not permitted" 00:42:58.022 } 00:42:58.022 11:37:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:58.022 11:37:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:58.022 11:37:22 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:58.022 11:37:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:58.022 11:37:22 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.tvf7spzjSr 00:42:58.022 11:37:22 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:58.022 11:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.tvf7spzjSr 00:42:58.281 11:37:22 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.tvf7spzjSr 00:42:58.281 11:37:22 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:58.281 11:37:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:58.281 11:37:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:58.281 11:37:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:58.281 11:37:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:58.281 11:37:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:58.540 11:37:23 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:58.540 11:37:23 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:42:58.540 11:37:23 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:42:58.540 11:37:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.540 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:58.798 [2024-11-17 11:37:23.306188] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.tvf7spzjSr': No such file or directory 00:42:58.798 [2024-11-17 11:37:23.306224] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:58.798 [2024-11-17 11:37:23.306262] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:58.798 [2024-11-17 11:37:23.306276] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:58.798 [2024-11-17 11:37:23.306288] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:58.798 [2024-11-17 11:37:23.306299] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:58.798 request: 00:42:58.798 { 00:42:58.798 "name": "nvme0", 00:42:58.798 "trtype": "tcp", 00:42:58.798 "traddr": "127.0.0.1", 00:42:58.798 "adrfam": "ipv4", 00:42:58.798 "trsvcid": "4420", 00:42:58.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:58.798 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:42:58.798 "prchk_reftag": false, 00:42:58.798 "prchk_guard": false, 00:42:58.798 "hdgst": false, 00:42:58.798 "ddgst": false, 00:42:58.798 "psk": "key0", 00:42:58.798 "allow_unrecognized_csi": false, 00:42:58.798 "method": "bdev_nvme_attach_controller", 00:42:58.798 "req_id": 1 00:42:58.798 } 00:42:58.798 Got JSON-RPC error response 00:42:58.798 response: 00:42:58.798 { 00:42:58.798 "code": -19, 00:42:58.798 "message": "No such device" 00:42:58.798 } 00:42:58.798 11:37:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:42:58.798 11:37:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:42:58.798 11:37:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:42:58.798 11:37:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:42:58.798 11:37:23 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:58.798 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:59.057 11:37:23 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TWYyRjs8jF 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:59.057 11:37:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:59.057 11:37:23 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:42:59.057 11:37:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:42:59.057 11:37:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:42:59.057 11:37:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:42:59.057 11:37:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TWYyRjs8jF 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TWYyRjs8jF 00:42:59.057 11:37:23 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TWYyRjs8jF 00:42:59.057 11:37:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TWYyRjs8jF 00:42:59.057 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TWYyRjs8jF 00:42:59.316 11:37:23 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.316 11:37:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:59.882 nvme0n1 00:42:59.882 11:37:24 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:59.882 11:37:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:59.882 11:37:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:59.882 11:37:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:59.882 11:37:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:59.882 
11:37:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:59.882 11:37:24 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:59.882 11:37:24 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:59.882 11:37:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:00.449 11:37:24 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:00.449 11:37:24 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:00.449 11:37:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.449 11:37:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.449 11:37:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:00.449 11:37:25 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:00.449 11:37:25 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:00.449 11:37:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:00.449 11:37:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:00.449 11:37:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:00.449 11:37:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:00.449 11:37:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:01.018 11:37:25 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:01.018 11:37:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:01.018 11:37:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:43:01.018 11:37:25 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:01.018 11:37:25 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:01.018 11:37:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:01.276 11:37:25 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:01.276 11:37:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TWYyRjs8jF 00:43:01.276 11:37:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TWYyRjs8jF 00:43:01.534 11:37:26 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.6QmVcvvdIX 00:43:01.534 11:37:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.6QmVcvvdIX 00:43:02.101 11:37:26 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:02.101 11:37:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:02.358 nvme0n1 00:43:02.358 11:37:26 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:02.358 11:37:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:02.617 11:37:27 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:02.617 "subsystems": [ 00:43:02.617 { 00:43:02.617 "subsystem": "keyring", 00:43:02.617 
"config": [ 00:43:02.617 { 00:43:02.617 "method": "keyring_file_add_key", 00:43:02.617 "params": { 00:43:02.617 "name": "key0", 00:43:02.617 "path": "/tmp/tmp.TWYyRjs8jF" 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "keyring_file_add_key", 00:43:02.617 "params": { 00:43:02.617 "name": "key1", 00:43:02.617 "path": "/tmp/tmp.6QmVcvvdIX" 00:43:02.617 } 00:43:02.617 } 00:43:02.617 ] 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "subsystem": "iobuf", 00:43:02.617 "config": [ 00:43:02.617 { 00:43:02.617 "method": "iobuf_set_options", 00:43:02.617 "params": { 00:43:02.617 "small_pool_count": 8192, 00:43:02.617 "large_pool_count": 1024, 00:43:02.617 "small_bufsize": 8192, 00:43:02.617 "large_bufsize": 135168, 00:43:02.617 "enable_numa": false 00:43:02.617 } 00:43:02.617 } 00:43:02.617 ] 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "subsystem": "sock", 00:43:02.617 "config": [ 00:43:02.617 { 00:43:02.617 "method": "sock_set_default_impl", 00:43:02.617 "params": { 00:43:02.617 "impl_name": "posix" 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "sock_impl_set_options", 00:43:02.617 "params": { 00:43:02.617 "impl_name": "ssl", 00:43:02.617 "recv_buf_size": 4096, 00:43:02.617 "send_buf_size": 4096, 00:43:02.617 "enable_recv_pipe": true, 00:43:02.617 "enable_quickack": false, 00:43:02.617 "enable_placement_id": 0, 00:43:02.617 "enable_zerocopy_send_server": true, 00:43:02.617 "enable_zerocopy_send_client": false, 00:43:02.617 "zerocopy_threshold": 0, 00:43:02.617 "tls_version": 0, 00:43:02.617 "enable_ktls": false 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "sock_impl_set_options", 00:43:02.617 "params": { 00:43:02.617 "impl_name": "posix", 00:43:02.617 "recv_buf_size": 2097152, 00:43:02.617 "send_buf_size": 2097152, 00:43:02.617 "enable_recv_pipe": true, 00:43:02.617 "enable_quickack": false, 00:43:02.617 "enable_placement_id": 0, 00:43:02.617 "enable_zerocopy_send_server": true, 00:43:02.617 
"enable_zerocopy_send_client": false, 00:43:02.617 "zerocopy_threshold": 0, 00:43:02.617 "tls_version": 0, 00:43:02.617 "enable_ktls": false 00:43:02.617 } 00:43:02.617 } 00:43:02.617 ] 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "subsystem": "vmd", 00:43:02.617 "config": [] 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "subsystem": "accel", 00:43:02.617 "config": [ 00:43:02.617 { 00:43:02.617 "method": "accel_set_options", 00:43:02.617 "params": { 00:43:02.617 "small_cache_size": 128, 00:43:02.617 "large_cache_size": 16, 00:43:02.617 "task_count": 2048, 00:43:02.617 "sequence_count": 2048, 00:43:02.617 "buf_count": 2048 00:43:02.617 } 00:43:02.617 } 00:43:02.617 ] 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "subsystem": "bdev", 00:43:02.617 "config": [ 00:43:02.617 { 00:43:02.617 "method": "bdev_set_options", 00:43:02.617 "params": { 00:43:02.617 "bdev_io_pool_size": 65535, 00:43:02.617 "bdev_io_cache_size": 256, 00:43:02.617 "bdev_auto_examine": true, 00:43:02.617 "iobuf_small_cache_size": 128, 00:43:02.617 "iobuf_large_cache_size": 16 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "bdev_raid_set_options", 00:43:02.617 "params": { 00:43:02.617 "process_window_size_kb": 1024, 00:43:02.617 "process_max_bandwidth_mb_sec": 0 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "bdev_iscsi_set_options", 00:43:02.617 "params": { 00:43:02.617 "timeout_sec": 30 00:43:02.617 } 00:43:02.617 }, 00:43:02.617 { 00:43:02.617 "method": "bdev_nvme_set_options", 00:43:02.617 "params": { 00:43:02.617 "action_on_timeout": "none", 00:43:02.617 "timeout_us": 0, 00:43:02.617 "timeout_admin_us": 0, 00:43:02.617 "keep_alive_timeout_ms": 10000, 00:43:02.617 "arbitration_burst": 0, 00:43:02.617 "low_priority_weight": 0, 00:43:02.617 "medium_priority_weight": 0, 00:43:02.617 "high_priority_weight": 0, 00:43:02.617 "nvme_adminq_poll_period_us": 10000, 00:43:02.617 "nvme_ioq_poll_period_us": 0, 00:43:02.617 "io_queue_requests": 512, 00:43:02.617 
"delay_cmd_submit": true, 00:43:02.617 "transport_retry_count": 4, 00:43:02.617 "bdev_retry_count": 3, 00:43:02.617 "transport_ack_timeout": 0, 00:43:02.617 "ctrlr_loss_timeout_sec": 0, 00:43:02.617 "reconnect_delay_sec": 0, 00:43:02.617 "fast_io_fail_timeout_sec": 0, 00:43:02.617 "disable_auto_failback": false, 00:43:02.617 "generate_uuids": false, 00:43:02.617 "transport_tos": 0, 00:43:02.617 "nvme_error_stat": false, 00:43:02.617 "rdma_srq_size": 0, 00:43:02.617 "io_path_stat": false, 00:43:02.618 "allow_accel_sequence": false, 00:43:02.618 "rdma_max_cq_size": 0, 00:43:02.618 "rdma_cm_event_timeout_ms": 0, 00:43:02.618 "dhchap_digests": [ 00:43:02.618 "sha256", 00:43:02.618 "sha384", 00:43:02.618 "sha512" 00:43:02.618 ], 00:43:02.618 "dhchap_dhgroups": [ 00:43:02.618 "null", 00:43:02.618 "ffdhe2048", 00:43:02.618 "ffdhe3072", 00:43:02.618 "ffdhe4096", 00:43:02.618 "ffdhe6144", 00:43:02.618 "ffdhe8192" 00:43:02.618 ] 00:43:02.618 } 00:43:02.618 }, 00:43:02.618 { 00:43:02.618 "method": "bdev_nvme_attach_controller", 00:43:02.618 "params": { 00:43:02.618 "name": "nvme0", 00:43:02.618 "trtype": "TCP", 00:43:02.618 "adrfam": "IPv4", 00:43:02.618 "traddr": "127.0.0.1", 00:43:02.618 "trsvcid": "4420", 00:43:02.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.618 "prchk_reftag": false, 00:43:02.618 "prchk_guard": false, 00:43:02.618 "ctrlr_loss_timeout_sec": 0, 00:43:02.618 "reconnect_delay_sec": 0, 00:43:02.618 "fast_io_fail_timeout_sec": 0, 00:43:02.618 "psk": "key0", 00:43:02.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.618 "hdgst": false, 00:43:02.618 "ddgst": false, 00:43:02.618 "multipath": "multipath" 00:43:02.618 } 00:43:02.618 }, 00:43:02.618 { 00:43:02.618 "method": "bdev_nvme_set_hotplug", 00:43:02.618 "params": { 00:43:02.618 "period_us": 100000, 00:43:02.618 "enable": false 00:43:02.618 } 00:43:02.618 }, 00:43:02.618 { 00:43:02.618 "method": "bdev_wait_for_examine" 00:43:02.618 } 00:43:02.618 ] 00:43:02.618 }, 00:43:02.618 { 00:43:02.618 
"subsystem": "nbd", 00:43:02.618 "config": [] 00:43:02.618 } 00:43:02.618 ] 00:43:02.618 }' 00:43:02.618 11:37:27 keyring_file -- keyring/file.sh@115 -- # killprocess 481157 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 481157 ']' 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 481157 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481157 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481157' 00:43:02.618 killing process with pid 481157 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@973 -- # kill 481157 00:43:02.618 Received shutdown signal, test time was about 1.000000 seconds 00:43:02.618 00:43:02.618 Latency(us) 00:43:02.618 [2024-11-17T10:37:27.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:02.618 [2024-11-17T10:37:27.276Z] =================================================================================================================== 00:43:02.618 [2024-11-17T10:37:27.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:02.618 11:37:27 keyring_file -- common/autotest_common.sh@978 -- # wait 481157 00:43:02.878 11:37:27 keyring_file -- keyring/file.sh@118 -- # bperfpid=482629 00:43:02.878 11:37:27 keyring_file -- keyring/file.sh@120 -- # waitforlisten 482629 /var/tmp/bperf.sock 00:43:02.878 11:37:27 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482629 ']' 00:43:02.878 11:37:27 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:43:02.878 11:37:27 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:02.878 11:37:27 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:02.878 11:37:27 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:02.878 11:37:27 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:02.878 "subsystems": [ 00:43:02.878 { 00:43:02.878 "subsystem": "keyring", 00:43:02.878 "config": [ 00:43:02.878 { 00:43:02.878 "method": "keyring_file_add_key", 00:43:02.878 "params": { 00:43:02.878 "name": "key0", 00:43:02.878 "path": "/tmp/tmp.TWYyRjs8jF" 00:43:02.878 } 00:43:02.878 }, 00:43:02.878 { 00:43:02.878 "method": "keyring_file_add_key", 00:43:02.878 "params": { 00:43:02.878 "name": "key1", 00:43:02.878 "path": "/tmp/tmp.6QmVcvvdIX" 00:43:02.878 } 00:43:02.878 } 00:43:02.878 ] 00:43:02.878 }, 00:43:02.878 { 00:43:02.878 "subsystem": "iobuf", 00:43:02.878 "config": [ 00:43:02.878 { 00:43:02.878 "method": "iobuf_set_options", 00:43:02.878 "params": { 00:43:02.878 "small_pool_count": 8192, 00:43:02.878 "large_pool_count": 1024, 00:43:02.878 "small_bufsize": 8192, 00:43:02.878 "large_bufsize": 135168, 00:43:02.878 "enable_numa": false 00:43:02.878 } 00:43:02.878 } 00:43:02.878 ] 00:43:02.878 }, 00:43:02.878 { 00:43:02.878 "subsystem": "sock", 00:43:02.878 "config": [ 00:43:02.878 { 00:43:02.878 "method": "sock_set_default_impl", 00:43:02.878 "params": { 00:43:02.878 "impl_name": "posix" 00:43:02.878 } 00:43:02.878 }, 00:43:02.878 { 00:43:02.878 "method": "sock_impl_set_options", 00:43:02.878 "params": { 00:43:02.878 "impl_name": "ssl", 00:43:02.878 "recv_buf_size": 4096, 00:43:02.878 "send_buf_size": 4096, 00:43:02.878 "enable_recv_pipe": true, 00:43:02.878 "enable_quickack": false, 00:43:02.879 
"enable_placement_id": 0, 00:43:02.879 "enable_zerocopy_send_server": true, 00:43:02.879 "enable_zerocopy_send_client": false, 00:43:02.879 "zerocopy_threshold": 0, 00:43:02.879 "tls_version": 0, 00:43:02.879 "enable_ktls": false 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "sock_impl_set_options", 00:43:02.879 "params": { 00:43:02.879 "impl_name": "posix", 00:43:02.879 "recv_buf_size": 2097152, 00:43:02.879 "send_buf_size": 2097152, 00:43:02.879 "enable_recv_pipe": true, 00:43:02.879 "enable_quickack": false, 00:43:02.879 "enable_placement_id": 0, 00:43:02.879 "enable_zerocopy_send_server": true, 00:43:02.879 "enable_zerocopy_send_client": false, 00:43:02.879 "zerocopy_threshold": 0, 00:43:02.879 "tls_version": 0, 00:43:02.879 "enable_ktls": false 00:43:02.879 } 00:43:02.879 } 00:43:02.879 ] 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "subsystem": "vmd", 00:43:02.879 "config": [] 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "subsystem": "accel", 00:43:02.879 "config": [ 00:43:02.879 { 00:43:02.879 "method": "accel_set_options", 00:43:02.879 "params": { 00:43:02.879 "small_cache_size": 128, 00:43:02.879 "large_cache_size": 16, 00:43:02.879 "task_count": 2048, 00:43:02.879 "sequence_count": 2048, 00:43:02.879 "buf_count": 2048 00:43:02.879 } 00:43:02.879 } 00:43:02.879 ] 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "subsystem": "bdev", 00:43:02.879 "config": [ 00:43:02.879 { 00:43:02.879 "method": "bdev_set_options", 00:43:02.879 "params": { 00:43:02.879 "bdev_io_pool_size": 65535, 00:43:02.879 "bdev_io_cache_size": 256, 00:43:02.879 "bdev_auto_examine": true, 00:43:02.879 "iobuf_small_cache_size": 128, 00:43:02.879 "iobuf_large_cache_size": 16 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_raid_set_options", 00:43:02.879 "params": { 00:43:02.879 "process_window_size_kb": 1024, 00:43:02.879 "process_max_bandwidth_mb_sec": 0 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_iscsi_set_options", 
00:43:02.879 "params": { 00:43:02.879 "timeout_sec": 30 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_nvme_set_options", 00:43:02.879 "params": { 00:43:02.879 "action_on_timeout": "none", 00:43:02.879 "timeout_us": 0, 00:43:02.879 "timeout_admin_us": 0, 00:43:02.879 "keep_alive_timeout_ms": 10000, 00:43:02.879 "arbitration_burst": 0, 00:43:02.879 "low_priority_weight": 0, 00:43:02.879 "medium_priority_weight": 0, 00:43:02.879 "high_priority_weight": 0, 00:43:02.879 "nvme_adminq_poll_period_us": 10000, 00:43:02.879 "nvme_ioq_poll_period_us": 0, 00:43:02.879 "io_queue_requests": 512, 00:43:02.879 "delay_cmd_submit": true, 00:43:02.879 "transport_retry_count": 4, 00:43:02.879 "bdev_retry_count": 3, 00:43:02.879 "transport_ack_timeout": 0, 00:43:02.879 "ctrlr_loss_timeout_sec": 0, 00:43:02.879 "reconnect_delay_sec": 0, 00:43:02.879 "fast_io_fail_timeout_sec": 0, 00:43:02.879 "disable_auto_failback": false, 00:43:02.879 "generate_uuids": false, 00:43:02.879 "transport_tos": 0, 00:43:02.879 "nvme_error_stat": false, 00:43:02.879 "rdma_srq_size": 0, 00:43:02.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:43:02.879 "io_path_stat": false, 00:43:02.879 "allow_accel_sequence": false, 00:43:02.879 "rdma_max_cq_size": 0, 00:43:02.879 "rdma_cm_event_timeout_ms": 0, 00:43:02.879 "dhchap_digests": [ 00:43:02.879 "sha256", 00:43:02.879 "sha384", 00:43:02.879 "sha512" 00:43:02.879 ], 00:43:02.879 "dhchap_dhgroups": [ 00:43:02.879 "null", 00:43:02.879 "ffdhe2048", 00:43:02.879 "ffdhe3072", 00:43:02.879 "ffdhe4096", 00:43:02.879 "ffdhe6144", 00:43:02.879 "ffdhe8192" 00:43:02.879 ] 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_nvme_attach_controller", 00:43:02.879 "params": { 00:43:02.879 "name": "nvme0", 00:43:02.879 "trtype": "TCP", 00:43:02.879 "adrfam": "IPv4", 00:43:02.879 "traddr": "127.0.0.1", 00:43:02.879 "trsvcid": "4420", 00:43:02.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.879 "prchk_reftag": false, 00:43:02.879 "prchk_guard": false, 00:43:02.879 "ctrlr_loss_timeout_sec": 0, 00:43:02.879 "reconnect_delay_sec": 0, 00:43:02.879 "fast_io_fail_timeout_sec": 0, 00:43:02.879 "psk": "key0", 00:43:02.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.879 "hdgst": false, 00:43:02.879 "ddgst": false, 00:43:02.879 "multipath": "multipath" 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_nvme_set_hotplug", 00:43:02.879 "params": { 00:43:02.879 "period_us": 100000, 00:43:02.879 "enable": false 00:43:02.879 } 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "method": "bdev_wait_for_examine" 00:43:02.879 } 00:43:02.879 ] 00:43:02.879 }, 00:43:02.879 { 00:43:02.879 "subsystem": "nbd", 00:43:02.879 "config": [] 00:43:02.879 } 00:43:02.879 ] 00:43:02.879 }' 00:43:02.879 11:37:27 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:02.879 11:37:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:02.879 [2024-11-17 11:37:27.373288] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:43:02.879 [2024-11-17 11:37:27.373375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482629 ] 00:43:02.879 [2024-11-17 11:37:27.443627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.879 [2024-11-17 11:37:27.492063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:03.138 [2024-11-17 11:37:27.677980] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:03.138 11:37:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:03.138 11:37:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:03.138 11:37:27 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:03.138 11:37:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.138 11:37:27 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:03.396 11:37:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:03.396 11:37:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:03.655 11:37:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.655 11:37:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:03.655 11:37:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.655 11:37:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.655 11:37:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:03.913 11:37:28 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:03.913 11:37:28 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:03.913 11:37:28 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:03.913 11:37:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:03.913 11:37:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:03.913 11:37:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:03.913 11:37:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:04.171 11:37:28 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:04.171 11:37:28 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:04.171 11:37:28 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:04.171 11:37:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:04.430 11:37:28 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:04.430 11:37:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:04.430 11:37:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TWYyRjs8jF /tmp/tmp.6QmVcvvdIX 00:43:04.430 11:37:28 keyring_file -- keyring/file.sh@20 -- # killprocess 482629 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482629 ']' 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482629 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482629 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 482629' 00:43:04.430 killing process with pid 482629 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@973 -- # kill 482629 00:43:04.430 Received shutdown signal, test time was about 1.000000 seconds 00:43:04.430 00:43:04.430 Latency(us) 00:43:04.430 [2024-11-17T10:37:29.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:04.430 [2024-11-17T10:37:29.088Z] =================================================================================================================== 00:43:04.430 [2024-11-17T10:37:29.088Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:04.430 11:37:28 keyring_file -- common/autotest_common.sh@978 -- # wait 482629 00:43:04.688 11:37:29 keyring_file -- keyring/file.sh@21 -- # killprocess 481145 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 481145 ']' 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@958 -- # kill -0 481145 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 481145 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 481145' 00:43:04.688 killing process with pid 481145 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@973 -- # kill 481145 00:43:04.688 11:37:29 keyring_file -- common/autotest_common.sh@978 -- # wait 481145 00:43:04.946 00:43:04.946 real 0m14.365s 00:43:04.946 user 0m36.849s 00:43:04.946 sys 0m3.219s 00:43:04.946 11:37:29 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:04.946 11:37:29 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:04.946 ************************************ 00:43:04.946 END TEST keyring_file 00:43:04.946 ************************************ 00:43:04.946 11:37:29 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:04.946 11:37:29 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:04.946 11:37:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:04.946 11:37:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:04.946 11:37:29 -- common/autotest_common.sh@10 -- # set +x 00:43:04.946 ************************************ 00:43:04.946 START TEST keyring_linux 00:43:04.946 ************************************ 00:43:04.946 11:37:29 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:04.946 Joined session keyring: 262257557 00:43:04.946 * Looking for test storage... 
00:43:04.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:04.946 11:37:29 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:04.946 11:37:29 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:43:04.946 11:37:29 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.205 --rc genhtml_branch_coverage=1 00:43:05.205 --rc genhtml_function_coverage=1 00:43:05.205 --rc genhtml_legend=1 00:43:05.205 --rc geninfo_all_blocks=1 00:43:05.205 --rc geninfo_unexecuted_blocks=1 00:43:05.205 00:43:05.205 ' 00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.205 --rc genhtml_branch_coverage=1 00:43:05.205 --rc genhtml_function_coverage=1 00:43:05.205 --rc genhtml_legend=1 00:43:05.205 --rc geninfo_all_blocks=1 00:43:05.205 --rc geninfo_unexecuted_blocks=1 00:43:05.205 00:43:05.205 ' 
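The `cmp_versions` trace above (scripts/common.sh evaluating `lt 1.15 2` to choose lcov options) splits each version string on `.`/`-` and compares the numeric components left to right, padding the shorter list with zeros. A minimal Python sketch of that comparison; the function name `version_lt` is ours, not SPDK's:

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Sketch of scripts/common.sh cmp_versions with op "<": split on "." or "-",
    compare numeric components left to right, missing components count as 0."""
    pa = [int(x) for x in re.split(r"[.-]", a)]
    pb = [int(x) for x in re.split(r"[.-]", b)]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    # Python list comparison is already lexicographic, matching the
    # component-by-component loop in the shell version.
    return pa < pb
```

As in the trace, `version_lt("1.15", "2")` holds, so the lcov 1.x option set is selected. Note the numeric split is why `1.9 < 1.15` here, where a plain string comparison would get it wrong.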
00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.205 --rc genhtml_branch_coverage=1 00:43:05.205 --rc genhtml_function_coverage=1 00:43:05.205 --rc genhtml_legend=1 00:43:05.205 --rc geninfo_all_blocks=1 00:43:05.205 --rc geninfo_unexecuted_blocks=1 00:43:05.205 00:43:05.205 ' 00:43:05.205 11:37:29 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:05.205 --rc genhtml_branch_coverage=1 00:43:05.205 --rc genhtml_function_coverage=1 00:43:05.205 --rc genhtml_legend=1 00:43:05.205 --rc geninfo_all_blocks=1 00:43:05.205 --rc geninfo_unexecuted_blocks=1 00:43:05.205 00:43:05.205 ' 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:05.205 11:37:29 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:05.205 11:37:29 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.205 11:37:29 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.205 11:37:29 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.205 11:37:29 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:05.205 11:37:29 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:05.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:05.205 11:37:29 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:05.205 11:37:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:05.205 11:37:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:05.206 /tmp/:spdk-test:key0 00:43:05.206 11:37:29 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:05.206 11:37:29 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:05.206 11:37:29 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:05.206 /tmp/:spdk-test:key1 00:43:05.206 11:37:29 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=483095 00:43:05.206 11:37:29 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:05.206 11:37:29 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 483095 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 483095 ']' 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:05.206 11:37:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:05.206 [2024-11-17 11:37:29.811845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
00:43:05.206 [2024-11-17 11:37:29.811938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483095 ] 00:43:05.466 [2024-11-17 11:37:29.876133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.466 [2024-11-17 11:37:29.918724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:05.725 [2024-11-17 11:37:30.177652] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.725 null0 00:43:05.725 [2024-11-17 11:37:30.209726] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:05.725 [2024-11-17 11:37:30.210307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:05.725 885877302 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:05.725 618064185 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=483111 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:05.725 11:37:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 483111 /var/tmp/bperf.sock 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 483111 ']' 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:05.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:05.725 11:37:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:05.725 [2024-11-17 11:37:30.276827] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization... 
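The `prep_key` steps above pipe each raw hex key through `format_interchange_psk` (a Python one-liner sourced from nvmf/common.sh) before it is loaded into the session keyring with `keyctl add`. A sketch of that NVMe TLS PSK interchange framing as inferred from the logged keys: the `NVMeTLSkey-1` prefix, a two-digit hash indicator (`00` = configured PSK used directly), then base64 of the key bytes with a 4-byte CRC-32 appended. The little-endian CRC packing is an assumption, not something readable from this log:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hash_indicator: int = 0) -> str:
    """Wrap a configured PSK in the NVMe TLS interchange format:
    NVMeTLSkey-1:<hh>:base64(key || CRC-32(key)):
    CRC-32 packed little-endian (assumed, per our reading of the spec)."""
    raw = key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(raw))
    return f"NVMeTLSkey-1:{hash_indicator:02d}:{base64.b64encode(raw + crc).decode()}:"
```

Decoding the base64 body of the key0 value loaded above does give back the configured hex string `00112233445566778899aabbccddeeff` in the first 32 bytes, with 4 trailing CRC bytes, which is what this framing predicts.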
00:43:05.725 [2024-11-17 11:37:30.276904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483111 ] 00:43:05.725 [2024-11-17 11:37:30.345848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.984 [2024-11-17 11:37:30.392484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.984 11:37:30 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:05.984 11:37:30 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:05.984 11:37:30 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:05.984 11:37:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:06.242 11:37:30 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:06.243 11:37:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:06.501 11:37:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:06.501 11:37:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:06.760 [2024-11-17 11:37:31.391302] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:07.018 nvme0n1 00:43:07.018 11:37:31 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:07.018 11:37:31 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:07.018 11:37:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:07.018 11:37:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:07.018 11:37:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.018 11:37:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:07.277 11:37:31 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:07.277 11:37:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:07.277 11:37:31 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:07.277 11:37:31 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:07.277 11:37:31 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.277 11:37:31 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:07.277 11:37:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@25 -- # sn=885877302 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 885877302 == \8\8\5\8\7\7\3\0\2 ]] 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 885877302 00:43:07.537 11:37:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:07.537 11:37:32 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:07.537 Running I/O for 1 seconds... 00:43:08.916 9936.00 IOPS, 38.81 MiB/s 00:43:08.916 Latency(us) 00:43:08.916 [2024-11-17T10:37:33.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.916 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:08.916 nvme0n1 : 1.01 9933.61 38.80 0.00 0.00 12798.95 4466.16 15728.64 00:43:08.916 [2024-11-17T10:37:33.574Z] =================================================================================================================== 00:43:08.916 [2024-11-17T10:37:33.574Z] Total : 9933.61 38.80 0.00 0.00 12798.95 4466.16 15728.64 00:43:08.916 { 00:43:08.916 "results": [ 00:43:08.916 { 00:43:08.916 "job": "nvme0n1", 00:43:08.916 "core_mask": "0x2", 00:43:08.916 "workload": "randread", 00:43:08.916 "status": "finished", 00:43:08.916 "queue_depth": 128, 00:43:08.916 "io_size": 4096, 00:43:08.916 "runtime": 1.013126, 00:43:08.916 "iops": 9933.611416546411, 00:43:08.916 "mibps": 38.80316959588442, 00:43:08.916 "io_failed": 0, 00:43:08.916 "io_timeout": 0, 00:43:08.916 "avg_latency_us": 12798.948444915504, 00:43:08.916 "min_latency_us": 4466.157037037037, 00:43:08.916 "max_latency_us": 15728.64 00:43:08.916 } 00:43:08.916 ], 00:43:08.916 "core_count": 1 00:43:08.916 } 00:43:08.916 11:37:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:08.916 11:37:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:08.916 11:37:33 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:08.916 11:37:33 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:08.916 11:37:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:08.916 11:37:33 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:08.916 11:37:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:08.916 11:37:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.175 11:37:33 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:09.175 11:37:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:09.175 11:37:33 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:09.175 11:37:33 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.175 11:37:33 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:09.175 11:37:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:09.435 [2024-11-17 11:37:33.993904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:09.435 [2024-11-17 11:37:33.994294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc8f0 (107): Transport endpoint is not connected 00:43:09.435 [2024-11-17 11:37:33.995286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc8f0 (9): Bad file descriptor 00:43:09.435 [2024-11-17 11:37:33.996285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:09.435 [2024-11-17 11:37:33.996304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:09.435 [2024-11-17 11:37:33.996333] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:09.435 [2024-11-17 11:37:33.996349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
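The `NOT bperf_cmd bdev_nvme_attach_controller … --psk :spdk-test:key1` step above expects this attach to fail, and the target duly reports the controller in failed state. What `rpc.py` submits over the Unix socket is a plain JSON-RPC 2.0 call; a sketch of building a payload with the same method and the connection-relevant params seen in the `request:` dump that follows (trimmed to the fields of interest, request id arbitrary):

```python
import json

def attach_controller_request(req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 payload for bdev_nvme_attach_controller,
    mirroring the logged request (subset of params shown)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "nvme0",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "127.0.0.1",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "psk": ":spdk-test:key1",
        },
    })
```

The error response that comes back (`"code": -5`, an Input/output error) is what the surrounding `NOT` wrapper treats as success for this negative test.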
00:43:09.435 request: 00:43:09.435 { 00:43:09.435 "name": "nvme0", 00:43:09.435 "trtype": "tcp", 00:43:09.435 "traddr": "127.0.0.1", 00:43:09.435 "adrfam": "ipv4", 00:43:09.435 "trsvcid": "4420", 00:43:09.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:09.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:09.435 "prchk_reftag": false, 00:43:09.435 "prchk_guard": false, 00:43:09.435 "hdgst": false, 00:43:09.435 "ddgst": false, 00:43:09.435 "psk": ":spdk-test:key1", 00:43:09.435 "allow_unrecognized_csi": false, 00:43:09.435 "method": "bdev_nvme_attach_controller", 00:43:09.435 "req_id": 1 00:43:09.435 } 00:43:09.435 Got JSON-RPC error response 00:43:09.435 response: 00:43:09.435 { 00:43:09.435 "code": -5, 00:43:09.435 "message": "Input/output error" 00:43:09.435 } 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@33 -- # sn=885877302 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 885877302 00:43:09.435 1 links removed 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:09.435 
11:37:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@33 -- # sn=618064185 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 618064185 00:43:09.435 1 links removed 00:43:09.435 11:37:34 keyring_linux -- keyring/linux.sh@41 -- # killprocess 483111 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 483111 ']' 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 483111 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483111 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483111' 00:43:09.435 killing process with pid 483111 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 483111 00:43:09.435 Received shutdown signal, test time was about 1.000000 seconds 00:43:09.435 00:43:09.435 Latency(us) 00:43:09.435 [2024-11-17T10:37:34.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:09.435 [2024-11-17T10:37:34.093Z] =================================================================================================================== 00:43:09.435 [2024-11-17T10:37:34.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:09.435 11:37:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 483111 
00:43:09.695 11:37:34 keyring_linux -- keyring/linux.sh@42 -- # killprocess 483095 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 483095 ']' 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 483095 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483095 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483095' 00:43:09.695 killing process with pid 483095 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@973 -- # kill 483095 00:43:09.695 11:37:34 keyring_linux -- common/autotest_common.sh@978 -- # wait 483095 00:43:10.263 00:43:10.263 real 0m5.168s 00:43:10.263 user 0m10.311s 00:43:10.263 sys 0m1.641s 00:43:10.263 11:37:34 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:10.263 11:37:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:10.263 ************************************ 00:43:10.263 END TEST keyring_linux 00:43:10.263 ************************************ 00:43:10.263 11:37:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:10.263 11:37:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:10.263 11:37:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:10.263 11:37:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:10.263 11:37:34 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:10.263 11:37:34 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:10.263 11:37:34 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:10.263 11:37:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:10.263 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:43:10.263 11:37:34 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:10.263 11:37:34 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:10.263 11:37:34 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:10.263 11:37:34 -- common/autotest_common.sh@10 -- # set +x 00:43:12.166 INFO: APP EXITING 00:43:12.166 INFO: killing all VMs 00:43:12.166 INFO: killing vhost app 00:43:12.166 INFO: EXIT DONE 00:43:13.105 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:13.105 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:13.105 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:13.105 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:13.105 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:13.105 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:13.105 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:13.105 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:13.105 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:13.105 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:13.363 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:13.363 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:13.363 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:13.363 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:13.363 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:13.363 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:13.363 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:14.742 Cleaning 00:43:14.743 Removing: /var/run/dpdk/spdk0/config 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:14.743 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:14.743 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:14.743 Removing: /var/run/dpdk/spdk1/config 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:14.743 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:14.743 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:14.743 Removing: /var/run/dpdk/spdk2/config 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:14.743 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:14.743 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:14.743 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:14.743 Removing: /var/run/dpdk/spdk3/config 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:14.743 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:14.743 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:14.743 Removing: /var/run/dpdk/spdk4/config 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:14.743 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:14.743 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:43:14.743 Removing: /dev/shm/bdev_svc_trace.1 00:43:14.743 Removing: /dev/shm/nvmf_trace.0 00:43:14.743 Removing: /dev/shm/spdk_tgt_trace.pid99173 00:43:14.743 Removing: /var/run/dpdk/spdk0 00:43:14.743 Removing: /var/run/dpdk/spdk1 00:43:14.743 Removing: /var/run/dpdk/spdk2 00:43:14.743 Removing: /var/run/dpdk/spdk3 00:43:14.743 Removing: /var/run/dpdk/spdk4 00:43:14.743 Removing: /var/run/dpdk/spdk_pid100302 00:43:14.743 Removing: /var/run/dpdk/spdk_pid100442 00:43:14.743 Removing: /var/run/dpdk/spdk_pid101160 00:43:14.743 Removing: /var/run/dpdk/spdk_pid101171 00:43:14.743 Removing: /var/run/dpdk/spdk_pid101428 00:43:14.743 Removing: /var/run/dpdk/spdk_pid102749 00:43:14.743 Removing: /var/run/dpdk/spdk_pid103670 00:43:14.743 Removing: /var/run/dpdk/spdk_pid103870 00:43:14.743 Removing: /var/run/dpdk/spdk_pid104131 00:43:14.743 Removing: /var/run/dpdk/spdk_pid104394 00:43:14.743 Removing: /var/run/dpdk/spdk_pid104592 00:43:14.743 Removing: /var/run/dpdk/spdk_pid104752 00:43:14.743 Removing: /var/run/dpdk/spdk_pid104912 00:43:14.743 Removing: /var/run/dpdk/spdk_pid105102 00:43:14.743 Removing: /var/run/dpdk/spdk_pid105417 00:43:14.743 Removing: /var/run/dpdk/spdk_pid107878 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108042 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108229 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108237 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108535 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108549 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108962 00:43:14.743 Removing: /var/run/dpdk/spdk_pid108976 00:43:14.743 Removing: /var/run/dpdk/spdk_pid109143 00:43:14.743 Removing: /var/run/dpdk/spdk_pid109263 00:43:14.743 Removing: /var/run/dpdk/spdk_pid109435 00:43:14.743 Removing: /var/run/dpdk/spdk_pid109447 00:43:14.743 Removing: /var/run/dpdk/spdk_pid109944 00:43:14.743 Removing: /var/run/dpdk/spdk_pid110100 00:43:14.743 Removing: /var/run/dpdk/spdk_pid110309 00:43:14.743 Removing: /var/run/dpdk/spdk_pid112450 00:43:14.743 Removing: 
/var/run/dpdk/spdk_pid115086 00:43:14.743 Removing: /var/run/dpdk/spdk_pid122087 00:43:14.743 Removing: /var/run/dpdk/spdk_pid122500 00:43:14.743 Removing: /var/run/dpdk/spdk_pid125137 00:43:14.743 Removing: /var/run/dpdk/spdk_pid125300 00:43:14.743 Removing: /var/run/dpdk/spdk_pid128440 00:43:14.743 Removing: /var/run/dpdk/spdk_pid132170 00:43:14.743 Removing: /var/run/dpdk/spdk_pid134354 00:43:14.743 Removing: /var/run/dpdk/spdk_pid140656 00:43:14.743 Removing: /var/run/dpdk/spdk_pid146001 00:43:14.743 Removing: /var/run/dpdk/spdk_pid147210 00:43:14.743 Removing: /var/run/dpdk/spdk_pid147880 00:43:14.743 Removing: /var/run/dpdk/spdk_pid158249 00:43:14.743 Removing: /var/run/dpdk/spdk_pid160537 00:43:14.743 Removing: /var/run/dpdk/spdk_pid215667 00:43:14.743 Removing: /var/run/dpdk/spdk_pid218966 00:43:14.743 Removing: /var/run/dpdk/spdk_pid223410 00:43:14.743 Removing: /var/run/dpdk/spdk_pid227677 00:43:14.743 Removing: /var/run/dpdk/spdk_pid227685 00:43:14.743 Removing: /var/run/dpdk/spdk_pid228345 00:43:14.743 Removing: /var/run/dpdk/spdk_pid228995 00:43:14.743 Removing: /var/run/dpdk/spdk_pid229552 00:43:14.743 Removing: /var/run/dpdk/spdk_pid230075 00:43:14.743 Removing: /var/run/dpdk/spdk_pid230086 00:43:14.743 Removing: /var/run/dpdk/spdk_pid230343 00:43:14.743 Removing: /var/run/dpdk/spdk_pid230478 00:43:14.743 Removing: /var/run/dpdk/spdk_pid230480 00:43:14.743 Removing: /var/run/dpdk/spdk_pid231139 00:43:14.743 Removing: /var/run/dpdk/spdk_pid231676 00:43:14.743 Removing: /var/run/dpdk/spdk_pid232333 00:43:14.743 Removing: /var/run/dpdk/spdk_pid232731 00:43:14.743 Removing: /var/run/dpdk/spdk_pid232738 00:43:14.743 Removing: /var/run/dpdk/spdk_pid232993 00:43:14.743 Removing: /var/run/dpdk/spdk_pid233896 00:43:14.743 Removing: /var/run/dpdk/spdk_pid234625 00:43:14.743 Removing: /var/run/dpdk/spdk_pid239944 00:43:14.743 Removing: /var/run/dpdk/spdk_pid268214 00:43:14.743 Removing: /var/run/dpdk/spdk_pid271149 00:43:14.743 Removing: 
/var/run/dpdk/spdk_pid272410 00:43:14.743 Removing: /var/run/dpdk/spdk_pid274283 00:43:14.743 Removing: /var/run/dpdk/spdk_pid274423 00:43:14.743 Removing: /var/run/dpdk/spdk_pid274564 00:43:14.743 Removing: /var/run/dpdk/spdk_pid274710 00:43:14.743 Removing: /var/run/dpdk/spdk_pid275146 00:43:14.743 Removing: /var/run/dpdk/spdk_pid276462 00:43:14.743 Removing: /var/run/dpdk/spdk_pid277313 00:43:14.743 Removing: /var/run/dpdk/spdk_pid277629 00:43:14.743 Removing: /var/run/dpdk/spdk_pid279242 00:43:14.743 Removing: /var/run/dpdk/spdk_pid279668 00:43:14.743 Removing: /var/run/dpdk/spdk_pid280115 00:43:14.743 Removing: /var/run/dpdk/spdk_pid282506 00:43:14.743 Removing: /var/run/dpdk/spdk_pid285906 00:43:14.743 Removing: /var/run/dpdk/spdk_pid285907 00:43:14.743 Removing: /var/run/dpdk/spdk_pid285908 00:43:14.743 Removing: /var/run/dpdk/spdk_pid288115 00:43:14.743 Removing: /var/run/dpdk/spdk_pid290213 00:43:14.743 Removing: /var/run/dpdk/spdk_pid293742 00:43:14.743 Removing: /var/run/dpdk/spdk_pid317069 00:43:14.743 Removing: /var/run/dpdk/spdk_pid319869 00:43:14.743 Removing: /var/run/dpdk/spdk_pid323772 00:43:14.743 Removing: /var/run/dpdk/spdk_pid324705 00:43:14.743 Removing: /var/run/dpdk/spdk_pid325723 00:43:14.743 Removing: /var/run/dpdk/spdk_pid326761 00:43:14.743 Removing: /var/run/dpdk/spdk_pid329518 00:43:14.743 Removing: /var/run/dpdk/spdk_pid331996 00:43:14.743 Removing: /var/run/dpdk/spdk_pid334640 00:43:14.743 Removing: /var/run/dpdk/spdk_pid339189 00:43:14.743 Removing: /var/run/dpdk/spdk_pid339196 00:43:14.743 Removing: /var/run/dpdk/spdk_pid342089 00:43:14.743 Removing: /var/run/dpdk/spdk_pid342229 00:43:14.743 Removing: /var/run/dpdk/spdk_pid342426 00:43:14.743 Removing: /var/run/dpdk/spdk_pid342747 00:43:14.743 Removing: /var/run/dpdk/spdk_pid342752 00:43:14.743 Removing: /var/run/dpdk/spdk_pid343913 00:43:14.743 Removing: /var/run/dpdk/spdk_pid345127 00:43:14.743 Removing: /var/run/dpdk/spdk_pid346302 00:43:14.743 Removing: 
/var/run/dpdk/spdk_pid347478 00:43:14.743 Removing: /var/run/dpdk/spdk_pid348658 00:43:14.743 Removing: /var/run/dpdk/spdk_pid349837 00:43:14.743 Removing: /var/run/dpdk/spdk_pid353663 00:43:14.743 Removing: /var/run/dpdk/spdk_pid353998 00:43:14.743 Removing: /var/run/dpdk/spdk_pid355390 00:43:14.743 Removing: /var/run/dpdk/spdk_pid356127 00:43:14.743 Removing: /var/run/dpdk/spdk_pid359846 00:43:14.743 Removing: /var/run/dpdk/spdk_pid361820 00:43:14.743 Removing: /var/run/dpdk/spdk_pid365855 00:43:14.743 Removing: /var/run/dpdk/spdk_pid369175 00:43:14.743 Removing: /var/run/dpdk/spdk_pid375677 00:43:14.743 Removing: /var/run/dpdk/spdk_pid380049 00:43:14.743 Removing: /var/run/dpdk/spdk_pid380143 00:43:14.743 Removing: /var/run/dpdk/spdk_pid392803 00:43:14.743 Removing: /var/run/dpdk/spdk_pid393271 00:43:14.743 Removing: /var/run/dpdk/spdk_pid393730 00:43:15.001 Removing: /var/run/dpdk/spdk_pid394139 00:43:15.001 Removing: /var/run/dpdk/spdk_pid394716 00:43:15.001 Removing: /var/run/dpdk/spdk_pid395130 00:43:15.001 Removing: /var/run/dpdk/spdk_pid395534 00:43:15.001 Removing: /var/run/dpdk/spdk_pid396005 00:43:15.001 Removing: /var/run/dpdk/spdk_pid398564 00:43:15.001 Removing: /var/run/dpdk/spdk_pid398819 00:43:15.001 Removing: /var/run/dpdk/spdk_pid403122 00:43:15.001 Removing: /var/run/dpdk/spdk_pid403173 00:43:15.001 Removing: /var/run/dpdk/spdk_pid406537 00:43:15.001 Removing: /var/run/dpdk/spdk_pid409148 00:43:15.001 Removing: /var/run/dpdk/spdk_pid416181 00:43:15.001 Removing: /var/run/dpdk/spdk_pid416585 00:43:15.001 Removing: /var/run/dpdk/spdk_pid419079 00:43:15.001 Removing: /var/run/dpdk/spdk_pid419334 00:43:15.001 Removing: /var/run/dpdk/spdk_pid421851 00:43:15.001 Removing: /var/run/dpdk/spdk_pid425543 00:43:15.001 Removing: /var/run/dpdk/spdk_pid427697 00:43:15.001 Removing: /var/run/dpdk/spdk_pid434575 00:43:15.001 Removing: /var/run/dpdk/spdk_pid439775 00:43:15.001 Removing: /var/run/dpdk/spdk_pid440999 00:43:15.001 Removing: 
/var/run/dpdk/spdk_pid441677 00:43:15.001 Removing: /var/run/dpdk/spdk_pid451788 00:43:15.001 Removing: /var/run/dpdk/spdk_pid454037 00:43:15.001 Removing: /var/run/dpdk/spdk_pid456045 00:43:15.001 Removing: /var/run/dpdk/spdk_pid461079 00:43:15.001 Removing: /var/run/dpdk/spdk_pid461088 00:43:15.001 Removing: /var/run/dpdk/spdk_pid463974 00:43:15.001 Removing: /var/run/dpdk/spdk_pid465375 00:43:15.001 Removing: /var/run/dpdk/spdk_pid466782 00:43:15.001 Removing: /var/run/dpdk/spdk_pid467632 00:43:15.001 Removing: /var/run/dpdk/spdk_pid469544 00:43:15.001 Removing: /var/run/dpdk/spdk_pid470408 00:43:15.001 Removing: /var/run/dpdk/spdk_pid475709 00:43:15.001 Removing: /var/run/dpdk/spdk_pid476073 00:43:15.001 Removing: /var/run/dpdk/spdk_pid476461 00:43:15.001 Removing: /var/run/dpdk/spdk_pid478018 00:43:15.001 Removing: /var/run/dpdk/spdk_pid478410 00:43:15.001 Removing: /var/run/dpdk/spdk_pid478689 00:43:15.001 Removing: /var/run/dpdk/spdk_pid481145 00:43:15.001 Removing: /var/run/dpdk/spdk_pid481157 00:43:15.001 Removing: /var/run/dpdk/spdk_pid482629 00:43:15.001 Removing: /var/run/dpdk/spdk_pid483095 00:43:15.001 Removing: /var/run/dpdk/spdk_pid483111 00:43:15.001 Removing: /var/run/dpdk/spdk_pid97609 00:43:15.001 Removing: /var/run/dpdk/spdk_pid98346 00:43:15.001 Removing: /var/run/dpdk/spdk_pid99173 00:43:15.001 Removing: /var/run/dpdk/spdk_pid99620 00:43:15.001 Clean 00:43:15.001 11:37:39 -- common/autotest_common.sh@1453 -- # return 0 00:43:15.001 11:37:39 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:15.001 11:37:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:15.001 11:37:39 -- common/autotest_common.sh@10 -- # set +x 00:43:15.001 11:37:39 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:43:15.001 11:37:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:15.001 11:37:39 -- common/autotest_common.sh@10 -- # set +x 00:43:15.001 11:37:39 -- spdk/autotest.sh@392 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:15.001 11:37:39 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:15.001 11:37:39 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:15.001 11:37:39 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:43:15.001 11:37:39 -- spdk/autotest.sh@398 -- # hostname 00:43:15.001 11:37:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:15.259 geninfo: WARNING: invalid characters removed from testname! 00:43:47.323 11:38:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:50.621 11:38:14 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:53.163 11:38:17 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:56.459 11:38:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:59.002 11:38:23 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:02.300 11:38:26 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:05.596 11:38:29 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:05.596 11:38:29 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:05.596 11:38:29 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:05.596 11:38:29 -- 
common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:05.596 11:38:29 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:05.596 11:38:29 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:05.596 + [[ -n 5428 ]] 00:44:05.596 + sudo kill 5428 00:44:05.606 [Pipeline] } 00:44:05.619 [Pipeline] // stage 00:44:05.623 [Pipeline] } 00:44:05.635 [Pipeline] // timeout 00:44:05.639 [Pipeline] } 00:44:05.651 [Pipeline] // catchError 00:44:05.655 [Pipeline] } 00:44:05.667 [Pipeline] // wrap 00:44:05.672 [Pipeline] } 00:44:05.683 [Pipeline] // catchError 00:44:05.691 [Pipeline] stage 00:44:05.692 [Pipeline] { (Epilogue) 00:44:05.703 [Pipeline] catchError 00:44:05.704 [Pipeline] { 00:44:05.715 [Pipeline] echo 00:44:05.716 Cleanup processes 00:44:05.721 [Pipeline] sh 00:44:06.007 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.007 495368 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.021 [Pipeline] sh 00:44:06.308 ++ grep -v 'sudo pgrep' 00:44:06.308 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:06.308 ++ awk '{print $1}' 00:44:06.308 + sudo kill -9 00:44:06.308 + true 00:44:06.321 [Pipeline] sh 00:44:06.606 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:18.821 [Pipeline] sh 00:44:19.110 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:19.110 Artifacts sizes are good 00:44:19.127 [Pipeline] archiveArtifacts 00:44:19.135 Archiving artifacts 00:44:19.734 [Pipeline] sh 00:44:20.019 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:20.034 [Pipeline] cleanWs 00:44:20.044 [WS-CLEANUP] Deleting project workspace... 00:44:20.044 [WS-CLEANUP] Deferred wipeout is used... 
00:44:20.051 [WS-CLEANUP] done 00:44:20.053 [Pipeline] } 00:44:20.071 [Pipeline] // catchError 00:44:20.083 [Pipeline] sh 00:44:20.368 + logger -p user.info -t JENKINS-CI 00:44:20.376 [Pipeline] } 00:44:20.389 [Pipeline] // stage 00:44:20.395 [Pipeline] } 00:44:20.409 [Pipeline] // node 00:44:20.414 [Pipeline] End of Pipeline 00:44:20.455 Finished: SUCCESS